At this point, everything is in flux.
This issue of Silk v Brief is a monument to the instability of definition. Political boundaries, ideologies, gender, borders, law – all will be questioned in the following pages.
Even our magazine, now in its sixteenth year, is still grappling with itself – I’m hoping that this issue will mark a sort of coming-of-age. We look different, we sound different, we are different. In the past, controversy and idiosyncrasy were quashed; we have embraced them. You, reader, are different too – before now, this publication was solely consumed by law students. No more.
That said, as the magazine of the UCL Law Society, everything you’ll read here is, in some way, framed by law. Some articles directly discuss legal issues; others are undivorceable from legal paradigms; others are drenched in the kind of legal thought law students are immersed in; others, still, directly question this thought pattern.
So whoever you are, dear reader, take heed. Proceed with caution – take nothing for granted; challenge that which you thought to be concrete.
SITTING ON A BROKEN FENCE
If you read the politics pages of most any national newspaper you will be aware of the crisis engulfing the British left: the grand schism that has opened up between the centre-left (moderate, rationalist, electorally-minded, third-way Europhiles) and a resurgent Hard Left, brought to the fore of British politics by the election of Jeremy Corbyn to the leadership of the Labour Party.
This narrative has been recycled so obsequiously by the dominant voices within our news media as to now be treated with untouchable certainty. However, rather than pointing to a genuine crisis in the identity of “the left” (of which the Labour Party has become—erroneously—a synecdoche), this narrative has instead been promoted to obscure the true crisis of British politics: the collapse of the political centre.
In the UK, the party of the centre—the Liberal Democrats—has seen its share of the vote decimated, tainted by its disastrous decision to form a coalition government with the Conservatives in 2010. Under Nick Clegg (and following Charles Kennedy’s centre-left repositioning of the party), the Lib Dems had drawn in significant numbers of left-leaning voters with their anti-war stance, support for civil liberties and, famously, the promise not to raise university tuition fees.
The Lib Dems’ volte-face on this latter position, and the shock realisation for the electorate that, yes, “liberal” has connotations in an economic context as well as a private one, have eradicated support for the party, which has dropped from around 25% in 2010 to somewhere under 10% now. And all this at a time when commentators claiming centre or centre-left sensibilities are calling out in desperation for a pro-European party of moderation and reason to come riding to the rescue.
That is not to criticise the efforts of many centrists to provide just that: new parties have been sprouting and disintegrating like well-funded fungus. The Electoral Commission reported that 50 new parties were registered in 2017; none has gained prominence. United for Change, a pro-European party endowed with £50m by LoveFilm founder Simon Franks, found itself splitting before it had even officially launched.
As an issue, stopping Brexit possesses magnetic attraction for tenants of the centre-ground of British politics. But the failure of any new party to gain traction is testimony to the irreconcilable differences of outlook and opinion that exist outside of this one subject. Finding the mid-point between the left of the Tories and the right of the Labour Party does not mean finding a coherent set of principles on which to found a going political concern.
There is, of course, some commonality amongst centrists. In response to the latest global financial crisis, all three of the (then) major British political parties accepted that huge cuts to public spending were necessary to combat the budget deficit, which, we were told, was out of control. Bizarre and widely accepted analogies with tackling household debt were made in justification. The Exchequer slashed expenditure and rates of tax simultaneously, cutting its income twice and ballooning the national debt. The ensuing decade of austerity has failed spectacularly to dispense with the deficit, but has led to a simultaneous stagnation of wages and productivity unseen since before the industrial revolution.
It has been said many times before but deserves repeating: austerity is an economically illiterate set of policies that deliberately and punitively target the poorest. These policies have disproportionately affected people with disabilities, women and single parents, and the BMJ has linked their effects to the unnecessary deaths of 120,000 people.
These are the social policies that politicians and pundits from the centre have mandated as necessary for the last ten years. It was only three years ago that Harriet Harman, as interim leader of the Labour Party, whipped her MPs to abstain on cuts to Employment and Support Allowance, a benefit awarded to people too unwell to work. Chris Leslie’s thankfully short tenure as Harman’s Shadow Chancellor was marked, not by a coherent attack on the idiotic self-harm of austerity politics, but by an appeal to fiscal credibility premised upon supporting it.
Corbyn’s election to the leadership of the party has marked something of a break from this position. The party no longer seems intent on persuading the electorate that it can be just as brutal with cuts to public expenditure, and instead the task of dismantling the political consensus around austerity has begun.
The presentation—if not the implementation—of an explicitly anti-austerity position is what distinguishes both the British Labour Party and the Partido Socialista in Portugal from the rest of their social-democratic cousins across Europe. It is no coincidence that these two parties are the only European establishment centre-left parties to have avoided “Pasokification” over the last decade.
Those European parties of the centre-left that have clung to the neoliberal status quo have fared atrociously at general elections. This varies from country to country, but the trend is remarkably similar in each. The Swedish Social Democratic Party remains the country’s largest party despite a marked decline in electoral performances, while the Parti Socialiste of France and the Dutch Labour Party have seen a near fatal 80% drop in their share of the vote. Not only have these parties advocated and/or imposed austerity under the guise of fiscal necessity, they have also tended to back continuing market liberalisation.
In the UK this has regularly entailed the privatisation of state assets, often national infrastructure, which is subsequently leveraged to the hilt by private owners to pay exorbitant shareholder dividends. New owners act to load the business with massive debts, but do so safe in the knowledge that taxpayers will foot the bill when creditors finally get cold feet and the whole enterprise collapses. Our prisons, water, and rail each provide fine examples of this form of capitalism, while the expansion of Heathrow airport bears the hallmarks of a future state bailout.
Since the late 1970s we have witnessed the richest in our country hoovering up greater and greater quantities of wealth from other parts of society. The housing boom of the last 25 years typifies this, vesting vast amounts of capital in the hands of an ageing populace and prompting widespread intergenerational inequality. The sale of the nation’s social housing stock has helped to fuel this, but created a housing crisis in the process, driving up rents whilst property ownership becomes concentrated in fewer and fewer hands. In a wonderfully gross perversion of purpose, hundreds of millions of pounds of housing benefit is paid each year to private landlords to subsidise extortionate rents for properties that were, until recently, publicly owned. Who knew that the magic money tree is actually a two-bed former council flat in Clapton?
The response from the political centre to this state of affairs has been dire. Among centrists, there is a dearth of considered thinking on how the economy can be rebalanced, or how people’s material conditions can be improved, without risking the charge of being Anti-Business and Anti-Wealth-Creation. I applaud the reader who can name a single Lib Dem policy that is not #StopBrexit. Even greater credit should be awarded to those who can detail the policy platforms of Owen Smith, Liz Kendall, Yvette Cooper or Andy Burnham, all of whom have lost to Corbyn in leadership elections since 2015.
Instead of seeking to provide solutions or alternatives to the crumbling economic consensus, the typical centrist position has been to criticise deviation from it, portraying such thinking as archaic, juvenile and, crucially, inherently and dangerously autocratic. Bereft of any serious suggestions as to how the fall in living standards can be mitigated (let alone reversed), the centre has attempted to position itself as grounded instead on a rationalist morality: the landscape of British politics viewed through the lens of the fallacy of the golden mean.
It has been a feature of the persistent attempts to undermine the leftward surge of the Labour Party to portray the membership as naïve, bullying and Britain-hating. The Labour membership has been branded as an ideologically purist metropolitan elite: a dangerous, cosseted troupe of naïfs, more concerned with eradicating political plurality than attempting electability, and, most recently, responsible for reintroducing the sickness of anti-Semitic racism into British politics.
Having been on the receiving end of this for three years solidly, there are many within the left of the Labour Party who are understandably angry at such an unrelenting and negative portrayal. But, should their responses ever boil over into abuse, the established figures of the political centre are quick to cry out in despair, aghast at the desecration of our once civil discourse.
This rush for the moral high ground has generally come in place of substantive economic and social analysis. It is easier to make ethical pronouncements on the baleful intentions of the New Old Labour cult than it is to interrogate the causes of our current political malaise. The alt-centre looks on in disbelief as the geography of political discourse convulses and transforms. Its main proponents sigh in exasperation at the perceived naivety of their opponents. “This is not the way things are” they say, as what is slips further and further from the basic tenets underpinning their political reality.
In the face of these political crises, enlightenment ideals of rationality and reason have been pushed to the fore by liberal centrists, as if Trump, Orbán, Erdoğan et al. can be fact-checked out of office. There is a dismaying tendency within the political centre to see problems arising from our vastly complex and over-determined social sphere as having solutions which lie in the realm of the scientific. Stability is sought through a kind of benevolent technocracy; the fantasy cabinet would of course include both Mark Carney and Dr Brian Cox.
It is said that the country has never been more divided. If such a statement is true, this division should be addressed by attacking financial stratification and the entrenchment of social immobility. But rather than focus on the material conditions of the electorate, centrist-reformists have primarily sought unity by an appeal to the Nation.
Both of the major parties have been guilty of crass nativism—think of Mrs May’s Hostile Environment and Labour’s “Controls on Immigration” mugs. Each has also made studious appeals to the fabled white working class voter who, we are led to believe, lives somewhere north of the Watford Gap, has never seen a cappuccino, and is reactionary at a genetic level. As Joe Kennedy has forcefully argued, the shadow of this stereotype stretches across the whole of our electoral politics, acting as an excuse to eschew any kind of materialist analysis.
We are, of course, British. Our politics is meant to be conducted with a politeness and civility that takes in the views of the agora and spits out the median value. We must listen to those with whom we disagree. And it is the caricature of the white working class voter that centrists (supposedly) disagree with. The deeply felt need to understand this ludicrously reductive and partial fabrication has helped promulgate an—again—supposedly pained tolerance of views that are morally reprehensible. Islamophobia isn’t racism, it’s a legitimate concern. Use of the nuclear deterrent isn’t genocidal, it’s the duty of any self-respecting British Prime Minister.
In the face of a Brexit vote which orbited around the issue of immigration, liberals took an illiberal turn. According to some, including former deputy prime minister Nick Clegg, the four freedoms that comprise the Single Market are in fact only three and a bit. On this view, free movement of persons should be subject to an “emergency brake”, while capital, services, and goods flow freely. Earlier this year, Emmanuel “Manu” Macron, the President of France and poster boy for the UK’s “politically homeless”, introduced proposals to criminalise illegal border crossings. There is no shortage of further examples, drawn from Western liberal democracies, of centre-right and -left politicians instrumentalising immigration to shore up their waning support.
This points to a deeply worrying trend in our politics. In lieu of offering an alternative to the precariousness of our economic situation, centrists have settled into an increasingly nativist pose. While this should be of concern to anyone holding remotely progressive values, it should not come as a surprise. In times of economic crisis, and faced with a resurgent left, the centre has, historically, tended to find common cause with the right. The interests of capital are prioritised above the safeguarding of liberties. The EU, that bastion of liberalism, has shown itself to be perfectly content with increasing conditionality on citizenship rights while fortifying its borders. It has also proven itself to be utterly impotent in combatting the rapid growth of far-right parties in almost every one of its Member States.
Disillusionment with globalisation (viz. neoliberalism) has spread into core Western economies—the very countries that have spent the last four decades reaping its benefits. In his unbridled wisdom, President Trump has fired the protectionist starting gun and signalled a shift away from solidarity amongst capitalist economies. The next economic crisis will not be weathered like the last.
If centrists truly believe in the merits of the middle path between two opposing positions, then as a concept “centrism” can only have meaning in the context of the political economy of the day. The centre will always exist in some form or another, but its future boundaries will be determined by whichever ideology gains hegemony in the coming years. For the liberals and social democrats of the current political centre, there is an increasingly stark choice to be made between economic democratisation and renewal on the one hand, and a regressive nationalism on the other. Given historical precedent, I would hope that this is not a difficult choice to make.
CAPITALISM IS ETERNAL LIFE
Neo Wei Sheng
Capitalism allows culture to flourish by giving people agency and giving them choice. For many people, wealth is meaningless unless they are able to express their personalities and their desires through that wealth. Capitalism gives people a democratic say in what their culture is and helps them to develop their identities in a way that is personally meaningful to them.
Capitalism is belonging. Under capitalism, businesses promote themselves by tying their products to a certain lifestyle. This is why perfume adverts rarely make any claims regarding the perfume itself, but rather show you a woman jumping into a hot air balloon with a handsome man and flying away. And since capitalism requires that businesses differentiate themselves from their competitors in order to succeed, the lifestyle advertised by each business will be different. To choose a brand or to buy into a certain business is to buy into its culture, and to join others who have made the same choice. When you buy from Vans, you aren’t simply buying a pair of shoes—you’re buying your entry pass into skateboarding culture, with its counterculture spirit, casual but cool.
Hippies are immediately identifiable to each other by their choice of dress, and hip hop sneakerheads who rush to buy Yeezys do so as a badge of their identity. Owning certain objects allows you to feel like you are part of a wider movement; to feel like you belong.
Capitalism is gothic Lolita otaku fangirls. The niche subcultures which allow people to explore their own identities can only exist where they are given space to grow organically. Subcultures are often too varied and unpredictable for a single organisation to understand fully, and under capitalism they have grown too numerous for any single planner to keep track of. The decentralised structure of capitalism allows us to accommodate them, and that matters, because for the people within them, their subculture is integral to their identity and their sense of self.
Think of the hikikomori who devotes his entire salary to his collection of Gundam memorabilia, until his room is plastered wall to wall with the idols of his devotion. For him, those objects are worth far more than good food or a better standard of living. And, even if we cannot empathise with his motives, we know that he would be devastated if his culture were to be taken away. Under capitalism, supply is highly reactive to demand, and as long as there is demand for gothic Lolita cosplay, small businesses will quickly enter the market and turn a profit from providing niche goods. Importantly, subcultures emerge through subtle changes in cultural norms over time; the members of subcultures are eased into them, and only capitalism has the ability to evolve at the same pace by responding to market incentives—neither too fast nor too slow.
Capitalism is eternal life. Under capitalism, each and every cultural instrument, no matter how foreign, has the potential to be plucked from obscurity—to be reborn, recombined, repackaged, and resold. The renowned chef who studies the family recipe of a lonely goat herder in Bhutan knows better than most that capitalism’s endless impulse for competition allows our myriad traditions to be included in the common stock of humanity.
To succeed under capitalism is to be better, to be different, to take something old and make it new. The fashion designer who visits the favelas of Brazil will take inspiration from the designs woven into the patchwork rags of young children along the streets; those designs will be considered and incorporated into the streetwear of thousands, where they will inspire yet someone else.
Under any other system, the traditional cultures of small communities will give way to monotony, and the past will crumble to ash.
It is only under capitalism that cultural instruments can survive contact with globalism, by evolving and assimilating into different contexts under different terms. In that way, cultural instruments can never truly die.
Those who hold capitalism responsible for gentrification or any other presumed harbingers of cultural destruction forget that under a centralised model, the communities which have developed their own unique way of life will nonetheless be swallowed up and made to conform—not through any use of force, but through the removal of essential tools which allow culture to thrive. For example, a centralised model will not explicitly demand that the gothic Lolitas cease their actions, but it will no longer be able to provide them with the tools and the goods they need to keep their culture alive.
Under capitalism, the cultures which are under threat are given a chance to live on, by being adopted by everyone else. The Japanese violinists who put on beautiful renditions of Mozart, the companies that improve their workplace by implementing aspects of hacker culture, the white rappers of the world—these are all testament to how the normalisation of niche cultures can be a boon to humanity. There are worse ways to go than that.
CITIZENS OF THE WORLD
Sophia Eugercios Stepanova
In a now infamous 2016 speech, Theresa May said: “If you believe you are a citizen of the world, you are a citizen of nowhere”. Far-right parties keep gaining followers around the globe by echoing this sentiment. Is there some truth to it? Are we losing our national identity to foreign influence? And is the political establishment giving too much decision power to other countries in the name of diplomacy and globalization?
Trade liberalization and immigration have changed the way people perceive the world regardless of their location: we consume much of the same food; wear many of the same clothes; and even sing the same songs. It is only natural to fear and reject this loss of local culture to massive foreign brands and influences: to clutch our flags and search for singularity. Far-right politicians seem to fill this niche across age groups: according to Ipsos, Marine Le Pen won 34% of the vote among 18 to 24-year-olds in the 2017 French election. Electoral records show that, in similar fashion, 34% of voters aged between 18 and 34 voted for the controversial centre-right coalition in Italy. They are not alone in this trend: from the Philippines’ PDP-Laban to Brazil’s Social Liberal Party, analogous right-wing parties keep appearing worldwide.
Populist groups on both sides of the political spectrum represent the biggest challenge to current administrations: citizens who no longer trust the establishment. However, in terms of electoral gains, the balance has tipped in favor of far-right populism. These factions have earned their ground by shifting the blame for the common fears of large masses of voters not just onto global powers or past politicians, but onto foreigners. They crown their successful strategy by founding their party identity on nationality: Hungary for the Hungarians, France for the French, America for the Americans; the same sentiment echoed time and time again. Globalization and its byproducts are to be viewed as a threat.
But why reject Globalization if we risk losing its long-term positive effects? Not only has this phenomenon provided consumers with lower prices and businesses with wider markets, it has helped accelerate technological advancements and reduce worldwide poverty. The percentage of the world population living in extreme poverty fell from 43% in 1990 to 14% in 2015, and a big contributor to these results has been the United Nations, by setting the 2030 Agenda and its predecessor program, the Millennium Development Goals. The actions of supranational institutions like the UN or the Organization of American States have boosted initiatives with positive impacts that single countries could hardly achieve by themselves.
Economic liberalism on a global scale has not only encouraged the appearance of social policies that can mitigate poverty, it has aided developing countries in finding investment to grow their economies and build the logistical and industrial structures needed to offer employment to their rapidly growing workforces. Chinese investments in Africa have unleashed the economic potential of many nations, through arrangements such as the Forum on China-Africa Cooperation, and projects like the construction of Porto do Caio in one of the poorest regions of Angola; experts believe it will provide 30,000 indirect jobs and serve as the most important commercial hub for the Republic of the Congo and the Democratic Republic of the Congo.
The issue, however, that makes globalization undesirable is its relationship with the rise in income inequality around the world. As stated by the secretary-general of the Organization for Economic Co-operation and Development, Ángel Gurría: “The average income of the richest 10% of the population is now around 10 times that of the poorest 10% across the OECD”. Whether through the devaluation of wages in low-skilled professions, inhumane working conditions or job outsourcing, voters have noticed this change. And if no existing parties can address these issues, new ones capable of doing so appear. In this fashion, the ongoing wave of right-wing nationalism expresses a blunt aversion to globalism.
Undoing globalization would be a challenging task, and not only because it would mean abandoning the benefits we have just looked at: its reach has permeated every aspect of our lives, from the agricultural policy system that subsidizes every European farmer and keeps prices stable, to the inclusion of foreign languages in public spaces. Globalization has also laid the foundations to best remedy its own shortcomings, by creating forums and democratic organizations that can regulate its progress. Isolating a country from the rest of the global community would not withdraw it from the problems that affect the entire world: the ozone layer will keep chipping away; migrants will keep trying to jump the border. By voicing the views of different states, we could reach a more balanced reform of the policies of supranational organizations, and of the current sources of international public law, to draft a pathway towards equality and restrict dependence on foreign direct investment. Income inequality could be better addressed through international efforts, such as investment in widening access to education and better infrastructure in disadvantaged countries, or regional cooperation in matters like technological development.
To reject open borders and commerce may not be a successful way to reclaim local customs either. Immigration has ingrained itself into society: who is to say what the purest form of being British is? Britain’s identity is English, Welsh, Scottish, and Irish as much as it is Muslim, Indian, Polish and, overall, European. But the Muslim identity and the Indian identity are also British, Chinese, European and more. Whether because they are residents here, because they live in tourist or expat-influenced areas, or because of the products and entertainment they consume, they are ingrained in our culture. Human heritage goes beyond borders, and it evolves through different influences. Bath is the product of Roman immigrants as much as chicken tikka masala is Bangladeshi and Indian.
Far-right parties tend to blame foreigners for the loss of local culture and see the coexistence of several cultures as a source of violence, lawlessness, and dissatisfaction. To quote Ms. Le Pen: “Multicultural societies are multi-conflict societies.” Yet there is little correlation between crime and immigration: a New York Times study found that the 10 cities with the largest influx of migrants in the USA had lower levels of crime in 2016 than in 1980. In Italy, where even older candidates like Berlusconi vowed to issue mass removals of aliens, there was a 65% decrease in the regional rates of crimes committed by foreigners between 2007 and 2016, according to state-provided data analysed by LSE experts.
Immigration keeps the population stable in developed states, as it offsets excess mortality rates and aids in meeting labour demands. In the UK, NIESR research concluded that a 1% increase in the immigrant share of the UK’s workforce led to a 0.06% rise in productivity. Thirty-five percent of the world’s migrants relocated to middle- and low-income nations, somewhat easing their lack of high-skilled workers and increasing commerce and investment connections to the rest of the world. Refugees have had positive impacts in developing African nations like Uganda and Tanzania, creating job opportunities for locals, promoting trade relationships, and spurring better communication infrastructure within their host territories, challenging the myth of the inactive refugee whose only income is welfare. Migration is good for the economy, especially with the proper government policy.
Far-right nationalists in refugee-recipient countries have presented the current crisis in Europe as an invasion more than an urgent humanitarian issue. It is hypocritical to view individuals running away from conflict and misery as aggressors, considering most countries have produced diasporas of their own. In Britain’s case, at the onset of the industrial revolution, 800,000 citizens fled their poor living conditions and settled in Canada. Between 1845 and 1849, 1.5 million Irish left their island as refugees from the Great Famine. During WWII, about 14,000 children were temporarily evacuated overseas to America and Canada. Every country has logistical limits on how many evacuees it can accept, and it is fair to critique whether their distribution has been handled well, but threatening mass deportations and rallying against these people as if they were nomadic criminals is enforcing a double standard.
Immigration is not a threat; it is the natural evolution of society at play. Through the once-outrageous accents of the peoples Rome conquered, often dismissed as barbaric tribesmen, Latin evolved into Spanish, French, Italian and Romanian. Is that a bad thing? We never lost Latin: we hold complete knowledge of its grammatical rules and lexicon, and to this day it is present in everything from coffee shop slogans to the periodic table. Are we citizens of nowhere? No. Even if we retreated behind our borders and rejected global citizenship, challenges that no country can face alone would still affect us. We are different national identities that have changed names and borders over centuries; nationality is mutable, not a substitute for identity but part of it.
FRINGE BRITISHNESS: A COMMONWEALTH CITIZENS' TALE
Alex Satoru Cheah
One of the biggest surprises of 2017 for me was finding out that I was eligible to vote in UK elections. As a Commonwealth citizen who had never lived in the UK until commencing my studies at UCL, and being hardly up-to-date with the political scene and voting system, I was intrigued that I had been bestowed the right to elect the leaders and policy makers of a country proud of its national identity and heritage.
One of the core principles behind the Leave campaign concerned the UK “taking back control” from Brussels and being free to dictate its own laws and its own future through the UK Parliament. The Leave campaign’s victory seemed to indicate a nationalistic trend within the UK, determined to eliminate foreign control of the country. Despite this, there have been no noteworthy protests against Commonwealth citizens residing in the UK having the right to participate in UK elections, or, for permanent residents, even the right to become a Member of Parliament (MP) themselves.
These privileges are not reciprocated in many Commonwealth states such as Canada and India – a British citizen resident in these states will not be able to vote in national elections, let alone become an MP, under any circumstance – which makes the UK’s decision all the more bizarre. I was also drawn to the fact that Commonwealth citizens are not considered foreigners in the UK – this is reflected by what is commonly known as the “Foreign Office” (or similar) in other countries being known as the “Foreign and Commonwealth Office” in the UK, a mark of the distinct and unique status accorded to Commonwealth citizens by the UK Government.
This led me to ponder the concept of British citizenship and the British identity, and what it means to be a citizen of a state.
The history of the Commonwealth dates back to 1926, when, amid the decolonisation of the British Empire, it was rebranded the British Commonwealth of Nations, and subsequently, in 1949, the Commonwealth of Nations, with its member states “free and equal”. The Commonwealth today reflects a shared history and a shared culture, with common features such as the common law, the Westminster system of parliamentary democracy, and the use of the English language, though member states owe no legal obligations to each other.
As citizens of former territories of the now-defunct British Empire, Commonwealth citizens do indeed possess some elements of British culture and identity, though it is debatable whether they do so to an extent that justifies a formally recognised identity status – conferred by virtue of the various rights granted to them – which I coin “Fringe Britishness”. I opine that the UK’s current liberal stance towards Commonwealth citizens’ rights and identity is not merely a relic of its colonial past, but reflects the UK’s decision to take a radical step away from a restrictive and functional communitarian concept of statehood, instead embracing a larger, more encompassing idea of identity and governance.
Most nations restrict the electorate to their own citizens, which enforces a united, communitarian spirit within them. It also confers a sense of legitimacy on the governing body, as it will have been chosen by the people whom it has been empowered to govern. Why, then, does the UK grant Commonwealth citizens this special privilege to vote, even though it may threaten to weaken the British communitarian spirit and the legitimacy of the government?
I postulate that what makes Commonwealth citizens deserving of such special rights is that the UK sees them, by virtue of their shared culture, beliefs, and history, as being part of the collective British identity. The forefathers of Commonwealth citizens – British subjects who worked within the British Empire – were members of British society and thus had a role in shaping the UK’s uncodified constitution into the way it is today.
The voices of Commonwealth citizens thus form part of the will of the people, and, therefore, share in the British identity (albeit in a subtler way than ordinary citizens). In contrast, non-British EU citizens do not share the same culture, beliefs, or history as citizens of the UK. They played no part in moulding British society, and therefore have no claim to any form of British identity. As such, they are rightfully treated as foreigners, with no claims to any part of British governance.
Needless to say, the much lesser degree of control that Commonwealth citizens have over UK policies and law, as compared to that of the EU, would form a strong alternative reason for the apparent acceptability of the status quo. Furthermore, there may be insufficient awareness, amongst both Commonwealth citizens and the British populace, of the rights that Commonwealth citizens resident in the UK possess, which might account for the lack of discourse on this matter. Nevertheless, the maintenance of a system granting Commonwealth citizens such rights, with no reciprocal benefits attached, suggests that the UK Government upholds it neither out of self-interest nor out of reluctant acquiescence, but because it believes that Commonwealth citizens resident in the UK do have a place in the collective British identity, and therefore make up part of the electorate.
As a Commonwealth citizen, this revelation has made me more acutely aware of my own identity and the unique position I hold during my residence here in the UK. Having spent a large part of my life in Singapore, an environment brimming with the vestiges of British colonial influence, I have gained a better appreciation of the “Fringe Britishness” that forms an inseparable part of my identity – something immediately observable when comparing myself with citizens of non-Commonwealth states with an otherwise similar cultural background, such as China or Korea. I have also explored the way the UK views the concept of citizenship and identity, and developed a renewed sense of appreciation for the opportunity to study here, progressing in my personal quest for development and self-discovery.
MYTHS AND DELUSIONS
Direct democracy is a myth. Developed by the ancient Athenian political system, the idea of “pure”, “direct”, or “classical” democracy is as unreachable as it is intangible in modern society. The UK’s system of government is more often labelled a “representative” democracy, meaning that although the public do not (usually) have a direct say over important political decisions, they can elect individuals to represent them in making those decisions. To be a representative democracy, then, the legislature has to be in some way “representative” of the general public. In the UK this is not the case. Between the bicameral nature of parliament (whereby the House of Commons is elected, but the House of Lords is not), the questionable outcomes of the first-past-the-post voting system and the unelected nature of judges, the UK presents itself more as a façade of democracy than a real one. And yet, a bigger question than whether the UK can really identify as a democratic society is whether it necessarily ought to.
Within a democratic society, it is important for power to be shared between the public and governing bodies, and that no single entity should possess either sole control or authority. The “rule of majority” should be seen to govern most, if not all, issues of importance for the country (such as welfare, the environment, national security, and domestic policy). Yet, often in the English legal system, unelected judges are asked to make decisions regarding these types of issues - not on any “rule of majority” - but instead on their own independent interpretation of parliamentary law.
Although judges do not pass legislation and thus do not directly regulate the behaviour of society, many accept the words of Lord Reid that “for better or for worse, judges do make law”. As such, the decisions made in the country seem to be skewed beyond “representative democracy”, as judges do not seem to accurately “represent” the public at all. The judicial diversity statistics of 2017 show that only 7% of court judges in the UK identify as Black, Asian or minority ethnic, whilst a mere 28% are female. For magistrates this is not much better: although 54% are recorded as being female, only 1% are under 30, and 86% are over 50. These statistics suggest that judges in courts are unable to reflect the UK population in a “representative” way, since certain groups (specifically those from BAME backgrounds and the under-50s) are widely under-represented within the profession.
This is an issue for our fundamental ideas of democracy. If the legal professionals who shape the progression of our society do not reasonably represent majority attitudes, then the laws being made cannot in good faith be labelled “democratic”. The JENGbA campaign supports this point of view, stating that the disproportionate representation of the black and mixed communities sentenced under joint enterprise laws points to unfair, unjust and discriminatory attitudes towards the working class and BAME communities. It is suggested that unconscious bias in the decisions of primarily white, middle-aged, and middle-class judges results in widespread miscarriages of justice, presenting the law not only as unrepresentative, but also discriminatory.
A person defending the democratic integrity of our government may assert that although judges do decide cases, and even “make law” in doing so, these decisions are ultimately policed by a democratically elected parliament. Parliament can overturn any judge-made decision of the courts, seemingly restoring the legal system’s right to assert itself as a “representative democracy”. However, this power is so rarely used as to be rendered almost irrelevant. The decision in R v R (1991), which made it illegal for a husband to rape his wife, was handed down less than thirty years ago, meaning that for some 250 years prior the contrary, outdated rule held authority, and precedent, in common law. The reality is that parliamentary time is scarce and not spent on reviewing and overturning judge-made decisions, meaning that in practice “undemocratic” common law holds as much authority as “democratic” parliament-made legislation.
Not only that, but we are also able to question the validity of the “democratic” legislation drafted by elected MPs, by showing that their claim to being a “representative” body of the public is false.
The first-past-the-post voting system, by which the public elects MPs into parliament, does not allow the majority to truly be represented. In the 2017 general election, only 11,623 out of 39,767 voters within the constituency of Ceredigion voted for Plaid Cymru, who consequently took the seat in the House of Commons, allegedly “representing” the entire constituency. That a seat in a “representative democracy” can be won with only 29.2% of the region’s vote share is ludicrous, but, more importantly, detrimental to our very understanding of the concept of democracy. The image of parliament as democratically elected may, in fact, fall short of the reality, leading to the question of how the UK could feasibly be labelled a democracy at all.
It is more fitting to understand the UK system of government as something entirely opposite to a democracy: a dictatorship. Specifically, an elected dictatorship. We, the public, elect the House of Commons, thereby electing the individuals who are to make and pass all legislation for the UK until the next election date. The individuals who make up parliament may therefore enact any laws or policies they please and enjoy almost absolute constitutional authority – which the doctrine of parliamentary sovereignty ensures.
Given that the public’s only power is to elect the body that is to hold this ultimate authority, to term the UK’s system of government in any way a “democracy” seems ill-considered.
However, we should rejoice at this fact rather than feel disheartened – at least we get to choose our poison, and the fact is that democracies simply do not work in modern society. Our “representative democracy” cannot guarantee any kind of representation.
Strands of direct democracy, in which the majority could truly “rule” through regular referendums, would fix the problems of representative democracy, but have issues of their own. These include implementation (due to the time and cost of several annual referendum votes), as well as negatively affecting voter turnout, since an influx of voting opportunities inspires voter apathy. The most significant issue with popular democracy, however, may be the impossibility of ensuring the public are suitably well equipped to “directly” make decisions about the social, economic and political wellbeing of their country. Indeed, the aftermath of the infamous EU referendum may be enough to demonstrate how the sheer complexity of legal and political issues means that they categorically should not be decided on the basis of public opinion alone, providing an example of why the UK ought not to identify as a democracy.
It is a fact that very few of us are well-versed in economics, history and worldwide politics. As a result, the public as a whole cannot have a firm grasp on the often complex and unpredictable implications that can arise from a single decision such as leaving the European Union. Indeed, Richard Dawkins stated on the EU referendum that: “We live in a representative democracy not a plebiscite democracy... this should be a matter for parliament.”
Here lies the problem: the public are oftentimes misled and misinformed about the implications of actions and ways of voting, such as the claim that the money the UK sends to the EU (allegedly £700 million a fortnight) would, on leaving, instead be put into our own national health system. This promise was later called a “mistake” by political leader Nigel Farage. Since people’s motivations for voting “leave” seemed to stem from a wide range of separate concerns (such as immigration, healthcare, and living expenses), it is easy to see how claims such as this one may well have had significant influence over individual voters. Campaigners for the remain side of the debate also indulged in false claims and exaggerated promises: George Osborne’s predicted “emergency budget”, which would supposedly have to follow a Brexit vote in order to “restore stability to public finances”, never came to pass.
The real question, then, is why we continue to push the narrative that we are a democracy when our efforts to exercise any democratic rights are consistently overlooked and dismissed by the people in power. Petitions are a prime example of this. Whilst petition.parliament.uk states that “at 10,000 signatures” the government will “respond”, these responses often merely reaffirm parliament’s prior position on the matter, meaning that petitioning achieves no reform, and barely any consideration.
Despite the Greenpeace petition against Hinkley Point C gathering over 270,000 signatures, and the project being publicly condemned by the likes of The Economist, which claimed it looked “extraordinarily bad value for money”, plans to build the nuclear power plant do not seem to have halted. We willingly accept that we live in a democracy because we all wish for our voices to be heard and our values appreciated. However, we should be more vigilant and consider whether this is simply an idea we are being peddled by political bodies who wish to keep their power and authority.
To conclude, the concept of democracy seems insufficient to categorise the UK. Our “democratic system” does not deliver on its promises of equal power, control, and rights between citizens, and is simply used as a device for politicians to hide behind in order to keep the general public feeling as though they are not, ultimately, completely under the control of political bodies.
I would argue that the biggest challenge a system like democracy poses is its inability truly to be implemented within society; the fact that it is often discussed as though it can be – and, moreover, as though it currently is implemented in the UK – hinders our ability to objectively assess both our own, and parliament’s, actions. To admit that we willingly elect our dictators can be a tough pill to swallow, and yet after considering the unrepresentative nature of parliament, it is increasingly hard to deny.
Power is a mysterious force that can promote peace or cause conflict; it is a force that humans strive for and desire. Acquiring power requires exploiting personal identity, which is one of the concepts individuals value the most. Moreover, a significant part of our personal identity consists of national identity. National identity is a sense of belonging that derives from a country’s culture, language, values, and beliefs. Nationalism, however, is national identity taken to the extreme, where individuals hold a patriotic feeling of superiority over other nationalities. This article argues that while national identity can make both social and economic contributions to a society, nationalism lacks balance and can disrupt peace.
One of the most significant characteristics of humans is the eagerness to form groups in order to create economically and socially balanced liveable spaces. However, communities are not just formed for the sake of livelihood. The need for moral support is equally in demand. In fact, the need for love and belonging is a key component of Maslow’s famous hierarchy of needs. Within the broader sociocultural context, individuals are social animals who unify and conform to group norms (Asch, 1956). National identity plays a crucial part in our world as it establishes unanimity and creates a sense of belonging between citizens of a certain country.
According to research by Michael Kalin and Niloufer Siddiqui, this sense of inclusion in society is helpful in overcoming prejudice. The aim of their research was to see whether “highlighting the national contributions of a religiously marginalised group increased tolerance towards that group, even when its presence in the nation-state is disputed” (Kalin and Siddiqui, 2017). The researchers conducted surveys in which subjects were informed that Pakistan’s first and only Nobel prize winner, who raised Pakistan’s national status, is a member of the Ahmedi community. In the end, it was concluded that those who enjoy a higher status in society, and who are more likely to identify with the nation, were more tolerant towards the discriminated-against Ahmedi group and tended to give unbiased responses. Hence, it is suggested that the sense of loyalty national identity brings can benefit society by creating a more supportive atmosphere for the sake of national and communal success.
Furthermore, as the world globalises, tourism and business increase day by day. In a diverse city like London, people are accustomed to living with their differences and are becoming ‘one’ in order to work together effectively. Hence, it becomes harder to hold on to national identity, and its essentiality can be questioned. By contrast, homogenous countries tend to manifest a stronger national identity, which has been shown to improve their socio-economic prospects to a certain extent. When citizens of smaller countries, like Denmark, form a national bond, it becomes easier to assemble institutions that are motivated to cooperate, coordinate, and sacrifice for the sake of national interest (Campbell and Hall, 2009). This cooperative culture makes them more resilient in coping with their vulnerability, better able to grow economically, and able to establish a certain respect within their community.
Nevertheless, ideologies require balance, and national identity can become destructive in its extreme form. This extreme, known as intense nationalism, can lead to dangerous outcomes such as radicalism, which harms our societies deeply. Moreover, the main strategy of rising nationalist groups is to manipulate history and origins as a way to acquire political power, which can lead to discrimination, racism, and even violence. For instance, “Alternative for Germany” is a German nationalist party with a racist, Islamophobic, anti-immigrant, anti-Semitic, xenophobic, white nationalist, and neo-Nazi ideology (Decker, 2016). Their aim is to use nationalism and German history as a tool to attract and persuade the public, make them victims of political deception, and justify arbitrary actions.
This argument is nowhere near suggesting that all nationalist groups and ideologies create terror and do harm to the public. But it does claim that, just as precedent lays down a framework for the future rule of law in the UK, intense nationalist violence throughout history should always be a caution for the future. Under such fanaticism lies an ignorance that offends people’s religion, race, and, more importantly, identity. Fanaticism in any context – sport, politics, religion, philosophy, or business – can, in fact, disrupt peace and order since, to a certain extent, it damages our ability to think critically, empathise, and respect. In short, it damages our humanity.
At this point, it is important to inquire whether fanaticism is a part of human nature or the result of an environmental stimulus such as social comparison. The necessity to form bonds is a part of human nature and has biological correlates (Hinde, 1975). Hence, the urge to commit to those bonds can derive from instinct. On the other hand, Festinger has proposed the Social Comparison Theory and hypothesised that “to the extent that objective and non-social means are not available, people evaluate their opinions and abilities by comparison respectively with the opinions and abilities of others” (Festinger, 1954). This implies that fanatic and nationalist views can easily spread within a community when individuals compare themselves and comply with a common ideology in order to feel strong and have pride. Both of these explanations are valid reasons to investigate the origin of this issue and take necessary actions.
Ultimately, national identity and nationalism are two sensitive concepts that have governed our lives for centuries. Still, national identity is a part of personal identity which encourages a positive emotion of love for one’s country and has socio-economic benefits, such as helping to overcome prejudice and to respond to economic challenges through collaboration. Yet it differs from nationalism in that nationalism carries underlying dangers – it is the perfect tool to disregard justice and manipulate individuals. Unfortunately, one step further down that line can lead to an abuse of power, resulting in further disruption of the scales of peace, equality, and justice.
MOVEMENTS OF NATIONAL IDENTITY
Nationalism has always been a source of conflict in Europe; there are an estimated 25 separatist movements today. This piece explores the paths a state which used to be part of an EU member state can take to rejoin the EU, and argues that there should be a policy of expedited accession available for secessionist states.
2. The common view: Article 49 of the TEU
In any successful bid for independence, it is unlikely that EU membership would automatically follow. The list of EU Member States, and their varying degrees of participation, is fixed, and therefore the addition of a new state must be done through the process set out in the treaties (Case 43/75 Defrenne II) – either Article 48 (renegotiation to extend the geographical scope of existing treaties) or Article 49 (admission of a new state to the EU). Both require unanimous ratification. While it may be easier to become a member via Article 48 since the treaties already apply to the new state prior to independence, I argue that the basis of utilising Article 49 would be more legitimate and reasonable.
On the most general level, Article 49 is the lex specialis for new members to join the EU. As Armstrong points out, the objective of Article 49 is to verify that a new Member State would be able to fulfil its obligations under EU Law. There is no reason why a new state, albeit one that already applies EU law, should not be subject to the same inspection. This is because the parent state would likely still assume a proportion of legislative competences, and it remains unclear if the new state would be able to perform well in policymaking in these areas.
Next, extending Article 48 – which is meant for amending the relationship among member states – to create a new member state may violate the Principle of Conferral. Existing members may not look kindly on such an unprecedented extension of EU competences, particularly given the growing wariness in recent years about the speed at which EU competences have grown (Gauweiler). There seems to be a trend, observed by Mancini, that the closer the union moves towards statehood, the greater the resistance to the attainment of this goal becomes. It is also notable that in Article 50, the process prescribed by Article 49 is recommended should the former Member State decide to re-join the EU. It is clear that the legislative intent behind the TEU is to designate Article 49 as the primary process of joining the EU, and Article 48 should not be used to frustrate the purpose of Article 49. Perhaps, to put it more simply: with new statehood that views the new state as separate from the parent state, the new state cannot claim to enjoy membership by (implied) extension of the parent state’s membership, since this contradicts the whole point of secession.
It has been argued that the predecessor of Article 48 – Article 236 of the EEC treaty – has been used to allow Greenland (previously subject to Danish Home Rule) to leave the EU and that this leaves open the possibility of utilising Article 48 to admit a new member. However, leaving the EU and joining the EU are vastly different; the EU is a “new legal order”, an “overriding and independent source of law”, and membership entails acceptance of the existing and future Acquis Communautaire.
3. The possibility of automatic membership
The use of Article 49 is uncontroversial, but some prefer to consider the possibility of automatic EU membership. This hinges on the need to ensure continuity in the application of EU law in the new state, rather than plunging it abruptly into a period of non-membership, and to protect the EU-law rights of both citizens of the new state and those of other member states. As Closa argues, the most favourable scenario will be one where “membership and its benefits would never disappear for the seceding state and its citizens”. This is not an implausible suggestion, as the new state – having applied EU law in the past – would be able to assimilate into the EU legal order more smoothly because it is institutionally capable of doing so. Academics proposing divergence from the formal entry process cite the sui generis nature of EU law, and argue that the regular Article 49 process relies too much on principles of Public International Law. Many have emphasised the need to protect and give continuity to EU membership, which is of “fundamental status” (Grzelczyk).
However, there are several obvious flaws with automatic membership. While EU citizenship and rights are indeed “fundamental”, Article 9 of the TEU states that EU citizenship should be “additional to national citizenship and shall not replace it”.
This means that EU citizenship is dependent on having the nationality of an EU member state. Secession is to be distinguished from exit, because the latter involves a nationality that, technically, used to have EU membership attached to it. Tierney argues, in addition, that even in cases of exit, it “cannot realistically be argued that the nationals leaving the EU would continue to be treated by the CJEU as EU citizens”. It should also be mentioned that whether citizens of the new state lose citizenship of the parent state is a matter between these states themselves; it is unlikely that they will continue to be citizens of the parent state, as this undercuts the independence of these new states.
On a practical level, it is also ironic to allow “internal applicants” when the EU has scaled back its efforts to widen the EU; Juncker has already declared that no new accession will take place in this term.
Automatic membership leaves many questions unanswered: what type of relationship would the new state enjoy with the EU? Would the same opt-outs be adopted? It is the need to settle these questions, and not simply declare EU membership as ‘continuing’, that makes the case for formal accession compelling. However, formal accession is also slow because of the negotiation process, which, based on principles of Public International Law, can only take place after a secessionist state becomes a state, as it is then that a secessionist state is capable of conducting foreign relations.
4. Expedited entry
Sir Edward proposes a more nuanced argument, in favour of continuity of the application of EU law, that while the new state would not accede automatically to membership, the flexibility of the sui generis nature of EU law would lead to a more favourable outcome, namely, an expedited process of re-entry. The same requirements of Article 49 would apply, but EU institutions and member states would be under a duty to start negotiations and determine the relationship between the new state and the EU before separation takes effect. He cites Article 50 as an example of the complex negotiation process of withdrawal due to the spirit of reciprocity within EU law, which means that there is a need for a mechanism to reduce disruption to EU citizenship rights due to withdrawal.
He does not see withdrawal as a result of secession and withdrawal as a result of triggering Article 50 as very different, and believes that the Article 50 negotiation process ought to be extended to cases of secession. Otherwise, if pre-exit negotiations are not available, new states would find themselves completely changed the moment independence happens; as Edward puts it (in the context of Scottish independence prior to the Brexit vote), “at the midnight hour...Erasmus students studying in Scotland would become 'foreign students' liable to pay full third country fees, as would students from England.” This would seem almost absurd and highly disruptive to the rights of EU citizens.
Edward’s case is compelling. Prima facie, the distinction between using Article 50 for withdrawal and the act of secession resulting in withdrawal by operation of law seems quite technical; the process involved – stripping the citizens of the new state of their union citizenship rights and removing the effect of EU law – is exactly the same. The difference in context should not be used to mask the identical nature of the complexity of withdrawal. After all, the gap between a legitimate vote for independence and actual independence is one that is merely technical – settling existing arrangements between the parent and new state. Discussions between them are unlikely to affect the type of ‘deal’ the new state wants with the EU. An expedited manner of entry respects the new state’s independence by according it autonomy to pursue the ‘deal’ it desires with the EU, while placing continuity and the protection of EU rights at its very core. Negotiations should start when it can be made sufficiently certain that independence is a matter of mere technicality and that there is a desire to continue arrangements with the EU. If a deal cannot be reached before independence, an interim deal protecting the rights of EU nationals, such as extending the European Communities Act 1972 to the new state until it is officially admitted as a member of the EU, would be a good solution.
The most significant problem with this approach would be the incongruity between Public International Law and EU law. The argument in favour of formal accession rests on the assumption that the new state would not be able to negotiate before becoming independent. This brings to the fore the question of whether it would be the parent state negotiating on behalf of the new state. However, the CJEU has often questioned the supremacy of Public International Law over conflicting EU law. As Edward argues, it is the treaties that one should refer to before turning to principles of Public International Law, for doing so reflects the sui generis nature of EU law. Thus, it may be possible to transcend the strict binary classification of state/non-state in international treaty negotiations. After all, few treaties are as penetrative, or demand as much continuity, as EU law.
I have argued for expedited entry, in the interest of protecting EU-law rights and preserving the continuity of EU law. The fear among existing member states that this could encourage secessionist movements in their own countries is unwarranted; states which are capable of initiating discussions with the EU are likely to be those which have reached an amicable agreement with the parent state for independence.
MORE THAN JUST MEMES
For some, the EU’s recent copyright reform package – formally the “Directive on Copyright in the Digital Single Market” – has been a long time coming. It will replace an almost 20-year-old Copyright Directive and (or so the European Commission hopes) make EU copyright rules fit for the digital age.
But it’s not been without controversy. Supporters hail the proposed directive as a win for creators in finally closing the “value gap” – the difference between the remuneration received by content creators and the profit made by internet platforms when they make their works accessible. Tech giants such as Google, Facebook and YouTube would be forced to share their revenue with European media and publishers. “This is not about censorship of the internet,” insist the CEOs of several media companies in an article published in the Guardian, “The primary focus of this legislation is concerned with whether or not the internet functions as a fair and efficient market place – and currently it doesn’t.”
Others aren’t convinced, with article 13 being one of the main sources of discontent. “One of the most wide-sweeping internet censorship rulings that I’ve ever seen”, says Wikimedia founder Jimmy Wales of the directive; “The European Parliament is putting corporate profits over freedom of speech” says Julia Reda, the German Pirate MEP. The copyright directive has widely been attacked as a threat to fundamental rights, and in July 2018, the Italian Wikipedia shut down in protest.
Despite fierce opposition, and despite the European Parliament rejecting a draft of the directive in July 2018, MEPs approved an updated version with minor changes in September 2018. It’s now up to the European Commission, the Council of the European Union, and the European Parliament to negotiate the final language of the directive; it’s likely that this process will finish by early 2019. The directive will then be implemented by the member states. But with what consequences?
Erosion of fundamental rights
The aim of the directive may not be to censor, but legal opinion weighs heavily in favour that it will – particularly thanks to article 13’s proposal of an automated filtering software. Thomas Margoni, senior lecturer in Intellectual Property and Internet Law at University of Glasgow, expresses concern that this will “undermine the free internet”. Article 13 makes online content sharing service providers responsible for the copyrighted content uploaded by users. Platforms would have to take a far more pro-active role in policing what is being uploaded on their servers, giving them strong incentives to be overly restrictive so as to avoid being held liable and fined. But, as Margoni states, “it is the users and free speech that will bear the brunt”. Users will be prevented from posting anything that might be covered by copyright: videos, songs and images for example, not to mention derivatives such as parodies, criticisms, discussions, news, and yes, memes.
Not only is the copyright directive likely to restrict freedom of speech, but it’s also incompatible with the fundamental rights the EU is supposed to have been built on. Consistent jurisprudence of the ECJ highlights that an automated filtering system such as that proposed by article 13 would fail to strike a fair balance between copyright and other fundamental rights. Case in point: Sabam v Netlog. Sabam, a collective management organisation, wanted to impose on Netlog’s social networking platform a system for filtering the information uploaded on Netlog’s servers. This filtering system was held by the ECJ to be in danger of violating fundamental rights, such as “the right to intellectual property, on the one hand, and the freedom to conduct business, the right to protection of personal data and the freedom to receive or impart information on the other”. In light of this case law, and seeing as this filtering system is very similar to the measure proposed under article 13, the new copyright directive can’t be considered compatible with the fundamental rights and freedoms guaranteed under articles 5, 11 and 16 of the Charter of Fundamental Rights.
Helping, rather than hindering, tech giants at the expense of smaller businesses
Indeed, there’s no evidence that the copyright directive will even achieve its aim of forcing tech giants to share their revenue with content creators – or that a “value gap” exists at all. Giancarlo Frosio dismisses the value gap as “a discourse almost exclusively fabricated by the music and entertainment industry.” No empirical evidence exists to prove that there is one: a report commissioned by the European Commission and delivered in May 2015 revealed that there is no “robust statistical evidence of displacement of sales by online copyright infringements”. Quite the opposite, in fact; the report even showed that, at least in the case of video games, “the estimated effect of illegal online transactions on sales is positive – implying that illegal consumption leads to increased legal consumption”. Content creators have been “flourishing” rather than “withering” in the European Digital Single Market, as Frosio colourfully explains. The proposed copyright directive risks ending up obsolete or even detrimental to the Digital Single Market because there is no real value gap to close.
Even if the alleged value gap did exist, the effect of the EU copyright directive would not be to close it. Indeed, the directive would largely force tech giants to do what they’re obliged to do already. From their debut in the mid-2000s to their growth into global players, large tech firms such as Google, YouTube and Facebook have “progressively adapted their business models from head-on challenge of copyright norms to adopting a more accommodating attitude towards copyright holders and press publishers”, says Maurizio Borghi. For example, YouTube has already agreed to share revenue with copyright holders using a technology called Content ID, which detects and manages copyright-protected music and video uploaded by users. It’s true that the copyright directive could potentially reduce the bargaining power of these companies with respect to the arrangements they have to make with content creators, enabling content creators to increase their share of revenue; but that cost falls far more heavily on small businesses than on the large American companies the European Commission seeks to target.
As Frosio states, “The proposed legislation might have unforeseen effects that would favour established market players, rather than competition in the content market online”. The extra costs that companies like YouTube will have to pay for contracts with copyright owners represent a small and insignificant portion of their profits. For new, smaller companies, however, these costs are huge and could leave them with no serious prospect of winning a substantial market share or challenging the dominance of tech giants. Filtering technologies such as copyright recognition software, moreover, remain expensive for small businesses, only adding to the costs they must shoulder. Since most established market players are US-based, the copyright directive would be a bad deal for European companies, and would “push the Digital Single Market further away, rather than promoting it”. This cannot be what the European Commission intended from the directive.
Perhaps the most significant issue with article 13 is that the filtering technology it requires does not exist. “There is no software that can expertly tell problematic uses of copyright material from allowed uses such as criticism, parody or news reporting, and accurate content recognition is still poor”, asserts Thomas Margoni. This makes “false positives” (the incorrect identification and removal of content) a widespread problem. University lectures and conference talks on copyright law, for example, have been blocked for copyright infringement.
Indeed, the Impact Assessment accompanying the proposal for the new directive itself acknowledged this problem – but it suggested that the procedural safeguards in article 13(2), which require member states to put in place “complaints and redress mechanisms” in case of incorrect removal of content, would be sufficient. Such a vaguely sketched provision is unlikely to offer real protection. Studies have shown, for example, that improperly formulated redress mechanisms have a “chilling effect” on users, who are dis-incentivised from using them to exert their rights. Moreover, the CJEU in Sabam v Netlog gave no indication that mechanisms of this kind are at all capable of addressing the harm caused when incorrect removals block “lawful communications”.
To sum up, article 13 of the EU copyright directive is nothing short of a train-wreck. It addresses a problem that doesn’t exist at the expense of fundamental rights and small businesses, and will help to reinforce the dominant position of US-based tech giants rather than force them to share their profits with content creators. The filtering technology that we have now is riddled with “false positives”, and its use as required by article 13 will constrain the legal use of protected material. If the European Commission wants to promote the Digital Single Market through the modernisation of EU copyright rules, this directive is the wrong way to do it.
A BLIP IN TIME
Choi Ching Jack
Not everyone with a criminal record is truly dangerous. In some cases, these “criminals” are, observably, no different from most of us. They may well be of ordinary prudence and mental temperament, but due to one out-of-character mistake, they are given a sentence that may haunt them for the rest of their lives.
The person-situation debate (or individual-situation debate) is a long-standing controversy in the field of social psychology: two contradicting schools of thought that seek to explain why humans behave the way they do. On the one hand, personality trait psychologists believe that the cause of human behaviour lies within the individual himself: that a person’s personality is what defines his actions in any given situation. On the other, situationists argue that one’s personality cannot be the sole driver of behaviour; rather, the situation influences how the person acts.
The significance of this debate was demonstrated in the infamous Stanford Prison Experiment, conducted by psychology professor Philip Zimbardo in 1971. While this study has been criticised for its lack of academic rigour, the experiment undoubtedly, and importantly, demonstrates one thing – that human behaviour is not purely caused by individual factors, nor is it wholly due to situational influences, but the interaction of both. In other words, one’s personality is affected by the situation, and the effect of the situation is dependent on the personality of the person.
Before the experiment, Zimbardo ensured that the participants he recruited were of sound mind, were psychologically stable, and had no prior criminal convictions. This was done through interviews, background checks and psychometric testing. This point is worth remembering – all of Zimbardo’s participants were normal and did not possess any observable characteristics of a criminal. The participants were then categorised into two groups, prison guards and prisoners, and were brought into a prison simulation in which the prisoners were to live for two weeks.
As the study progressed, the guards became increasingly aggressive: they started to bully the prisoners and violate their rights, slowly and unconsciously forgetting the fact that they were in a simulation and that the prisoners were innocent recruits just like themselves – they had adopted and internalised the identity of ‘prison guard’. The circumstances got so out of hand that Zimbardo had to call off the entire study eight days earlier than anticipated (it was terminated on the sixth day). This shows how potentially powerful situational influences can be – they can cause normal individuals with no pathological tendencies to act in vicious, unimaginable, inhumane ways.
However, not all the guards were mean and abusive. In fact, some were fair, playing by the rules and helping the prisoners; some were quiet and passive. This demonstrates something else: whilst a situation can exert influence on one’s actions, individual personality traits can predispose a person to act in a specific manner. Some people are predisposed to succumb to influences that cause aggressive behaviour more easily than others. Zimbardo’s findings illustrate that human behaviour cannot be blamed on individual characteristics alone, nor is the situation solely responsible for causing such actions.
To judge the entirety of one’s personality and worth from one particular instance in one particular place at a particular time is, therefore, to go against human nature itself. Unfortunately, this is the case for criminal justice systems in most jurisdictions.
This brings me to the case of Dr. Hadiza Bawa-Garba, a doctor in Leicester who was convicted of manslaughter by gross negligence over the death of a 6-year-old boy. She was subsequently struck off earlier this year. This case illustrates how blind the law can be to situational influences and human factors. At trial, evidence of the systemic failures that compromised the safety of the hospital environment on the day of the incident was not heard in court. The law should stand back and examine the bigger picture through a whole-system analysis; it should not just look at the actions of one individual in order to reach a decision. But that’s exactly what the law has done. The Medical Practitioners Tribunal Service concluded, after looking at the facts of the case as a whole, that Dr. Hadiza should be restored to the medical register. But due to the law’s insensitivity, a doctor who previously had an unsullied reputation was convicted of a major crime and stripped of her ability ever to practise in her field again.
Let us now consider the fictional case of Boyan. Boyan is a law student at a prestigious university in the UK, and is currently studying a Dual LLB/Juris Doctor (JD), under which he would spend his last two academic years abroad at Columbia University, New York. Boyan takes the bus to university. One day, on his way to a Tort Law tutorial, he realised he had left his Oyster card at home. Fearing retaliation by his tutor, he exclaimed “Screw it!” and hopped on the bus without paying the fare. A police officer was on the bus as this was happening and witnessed Boyan’s misdemeanour. Under Transport for London bylaws, Boyan would be charged a penalty fee or be prosecuted. If prosecuted, the resulting criminal record could see Boyan’s visa application to the USA refused when it is eventually time for him to go abroad. It would also be impossible for him to get a job as a police officer, government agent, or even a lawyer! People with criminal records can also be barred from becoming teachers, working with kids or even buying a house.
Some might say that this is to treat bullies as victims – making tyrants who deserve criminal sentencing seem like the ones who got hurt. But let us not forget the enormous impact a criminal record can have on someone’s life. The person will forever be haunted by the label of ‘criminal’, as challenges await her in her search for a job, a visa, admission as a lawyer or perhaps even a partner. Criminal records with such serious consequences cannot be given lightly. Are we not human too?
The criminal justice system is heavily skewed towards the ‘person’ side of the person-situation debate. It fails to account for the inconsistent, multidimensional, situation-dependent nature of human behaviour, and will often judge a person based on a single, uncharacteristic moment in his life. Is this fair to the defendant? Are we so unwilling to provide people with a second chance?
This is not to say that the criminal justice system is completely faulty. At the very least, our system is not at the worst end of the spectrum. We still give second chances. In the UK, the police or the Crown Prosecution Service (CPS) may give out a formal caution or a penalty notice for minor offences, such as writing graffiti, shoplifting or being drunk in public, as opposed to prosecuting the offender. But like any system, it needs refining. Police cautions should be administered more frequently, especially for young offenders, and should be extended to a wider array of minor offences – for example, tube fare evasion.
The system is brutal; it takes no chances – “I don’t know whether you’ll commit more crimes in the future, but I should put you in the criminal record database. Better safe than sorry.”
IGNORANCE ISN'T BLISS
An individual’s identity – who they are – is multi-faceted and entirely unique. The cultures with which one associates are part of this identity: through exposure to another culture, one develops a better understanding of an aspect of the identity and way of life of people within that culture. Learning about other cultures often puts us outside of our comfort zones into unfamiliar territory, which can sometimes deter us from doing so. However, this short-term discomfort is extremely rewarding in the long run; taking the time to learn about someone else’s culture shows respect and goodwill, eliminates ignorance, and overall improves relationships within society. This primarily drives my interest in Indian culture, which takes its own shape in interaction with the law.
Law and culture are intrinsically linked; this became clear to me through coursework on my LLB program, an internship in Mumbai, and the UCL South Asian Legal Studies Forum. I found that venturing to understand the identities of individuals different from oneself requires courage, and in pursuit of this goal, the law and its machinery provide a fascinating window into culture, life, and tradition.
A person’s identity and environment condition their unique relationship with the law. England and Wales being quite diverse, one person’s relationship with the law will be distinct from his neighbour’s, who may come from a different family with another way of life and traditions. It was this multicultural fabric which enabled me to learn much about Indian cultures as they exist within the United Kingdom merely from reading caselaw.
For instance, take the case of R v Dhaliwal. On its face, the case is one of great importance to current UK criminal law relating to causation, offences against the person, and manslaughter. However, an article responding to the case, written by J. Horder and L. McGowan, entitled ‘Manslaughter by Causing Another’s Suicide’, reveals the potential for the underlying cultural identity of involved parties to inform developments in the law.
Horder and McGowan examine the facts of the case in light of the cultural background of the defendant and victim, a British-Asian married couple. The authors draw attention to the abundance of similar domestic-abuse-related suicide cases amongst Asians of sub-continental origin residing in England and Wales, which they attribute in part to specific social and financial pressures within that demographic. An example of one such pressure is the potential reluctance of women to report abuse due to social stigma against doing so, choosing to bear the burden silently rather than seeking professional help. From this cultural perspective, the authors consider how the law could most effectively operate in cases like R v Dhaliwal, where victims are influenced by nuanced pressures.
Reading English and Welsh cases and academic opinion not only lent me a greater understanding of the law of the land, but also exposed me to cultures within the U.K. of which I previously had sparse knowledge. To one unfamiliar with South Asian culture and that of the South Asian Diaspora – the spread of people from the region to other areas of the world such as the U.K. – it may be difficult to relate to the cultural undercurrents of such unique pressures, and this may lead to misunderstanding of the pertinent issues.
However, by taking time to read the caselaw and academic opinion in a cultural light, one discovers the origins of and explanations for these patterns; making an effort to understand why many incidents of abuse go unreported amongst members of the South Asian community is much more helpful than trying to combat a problem through general, broad pathways. Knowledge of the unique pressures faced by specific groups, along with the underpinning cultural factors reinforcing them, invaluably informs discussions of how the law should operate.
I spent this past summer interning at a law firm in Mumbai, where I also noticed the interaction of law and culture. The legal profession provided an extraordinary lens through which to learn about the culture of Mumbai, particularly because practising advocates in India are primarily Indian citizens. I went alone, and with no familial ties to India, I found it to be a great test of my own courage yet a wonderful opportunity to learn about the place first-hand.
I gleaned a lot of information from listening to Indian advocates and observing their demeanour both in and outside the Bombay High Court. The languages one speaks are an intrinsic part of one’s identity, as are those spoken on a broader societal level. I don’t speak any Indian languages, though I can understand basic Hindi and can read and write its Devanagari script. During my first days in the courts of Mumbai, I dwelled on my discomfort at being unable to understand what was being said around me. This attitude prevented me from focusing, until I realised that it was not productive.
So, I changed my outlook; I realised that, although I couldn’t understand what was being said, I could glean information just from observing which language was being spoken. Bending an ear, I found the languages spoken varied depending upon the level of the court itself. All proceedings in the High Court of Bombay are in English, whereas in the lower courts a mixture of English, Hindi, and Marathi, a local language, is spoken. As many experienced speakers switch between languages unconsciously, the chosen language is a subtle indicator of the nature of the situation. English is reserved for formal situations and occasionally used between those with the opportunity to have learnt it, mostly the advocates and judges, whereas the clerks and administrative staff largely communicate in Hindi or Marathi.
It is clear that learning English is an opportunity for those with available time and resources, and as such opens doors to a higher tier of career options. The courts are a fascinating way to observe the linguistic practices and demarcations of a region. It is easy and all-too-tempting to resist discovery of a different culture merely because you can’t understand the spoken languages. However, by shedding stubbornness, and focusing not on what you can’t understand but what you can understand, you will observe patterns that give quite a bit of insight into the identity of a place, even in a legal setting.
The relatively-new National Company Law Tribunal (NCLT) hears matters regarding corporate insolvency. Several NCLTs across India service multiple states, with an overarching National Company Law Appellate Tribunal in Delhi. I often visited the Mumbai bench of the NCLT, which has territorial jurisdiction over three states of India, and is located on the sixth floor of a tall office building, accessed by a single elevator and set of stairs.
After climbing the flights of steps, dripping sweat in long workwear, one would inevitably find each of the NCLT’s three courtrooms to be nearly physically impossible to enter due to the volume of advocates, clerks, and interested parties inside. On the occasion that I would be able to squeeze myself in, the density of humans in the room distracted from the legal proceedings, and I felt very out of place and uncomfortable. After all, when there is such limited space, I figured that surely an advocate should be entitled to occupy my spot in the courtroom, as I was just a mere intern.
In these conditions, I wondered how the advocates could find mental space to perform their job. I once asked an advocate at my firm how he concomitantly dealt with this environment and his professional function, and he answered, ‘It’s okay, if you can get a seat.’ I found that this attitude was widely shared; nearly everyone dealt with these conditions with little more than a quick knowing smile, going about their business as usual.
From an outsider’s perspective, it would be easy to condemn these conditions, which are quite different from those in UK courtrooms, as the result of a lack of government funding or complacency on the part of legal professionals. I, too, might have joined in this critique had I not stood in those courtrooms, feeling uncomfortable and awkward, and observed the advocates for myself. There, I felt a widespread atmosphere of determination and resilience, and a sense that the lack of superfluous space would not impede the operation of the law or the ability of advocates to appear in front of the judge.
The advocates there were dressed well, energetic, intelligent, and I could see that they were committed to delivering the best possible performance. This was not an environment that required sympathy; it was actively invigorating and committed to betterment in the future. From this particular tribunal, I learned a lot about the spirit and identity of greater Mumbai, where inconvenience or discomfort does not stop the flow of life and progress.
Last spring, I attended the launch event of the UCL Laws South Asian Legal Studies Forum: a platform on which students can discuss and debate legal issues relating to South Asia. The event offered an illuminating analysis of the law, religion, politics, and identities of the region. I am thrilled that the Forum is initiating these discussions within the UCL Laws community.
Learning about South Asian identity, as it exists in the region itself as well as in the Diaspora, has not only allowed me to explore culture but also the law. The law has shown me intimate details of life, behaviour, and attitudes of the people to whom it applies. This is especially so due to the sensitive, personal nature of the law; oftentimes, court cases will involve the most cherished aspects of an individual’s world: their family, finances, or freedom. Here, at the intersection between law and culture, where identity is formed, lies the key to an aware and respectful society. And that, I think, is reason enough to push our own boundaries in pursuit of understanding.
On a late July night in Virginia, Bill Bosko returned home to the cold, clammy corpse of his brutally murdered wife, Michelle Moore-Bosko. She had been raped, battered, stabbed, and strangled to death in one of the grisliest murders Virginia had ever witnessed. It took investigators two years to put the ‘Norfolk four’, Michelle’s accused murderers, behind bars. Desperate for a conviction in a case that was beginning to grow cold, they were eager to start pointing fingers, even if it meant relying on unsubstantiated circumstantial evidence which resulted in the arrests of innocent men.
Now, I could bore you with disturbingly compelling evidence of the four men’s innocence, from rock-solid alibis to the myriad of inconsistencies in their confessions that led U.S. District Judge John A. Gibney Jr. to claim that “no sane human being” would convict them – but that’s not where I want this article to go. With all the media attention this issue has received (e.g. the Netflix series “The Confession Tapes”), I can safely assume that we know these men are innocent. We know innocent men sometimes confess to crimes they didn’t commit. We don’t know, however, why innocent men might voluntarily choose to implicate themselves. This article digs deeper into the role of the police force and the reasons why any human would purposely mould the identity of young, innocent men into cold-blooded murderers, as well as identifying potential solutions to this issue.
So why did these men confess if they hadn’t done it? Well, it all started in the interrogation room. The interrogation room is not what most of us imagine it to be: it is not always just a room where police officers seek information from suspects by posing a series of harmless questions. It is often a room where police officers badger vulnerable suspects, target their weak points, and use harsh, forceful, and at times utterly inhumane techniques to obtain confessions. With time, the role of the police force has transitioned from an investigative role, with the aim of finding as much reliable information as possible, to an inculpating role, with the aim of pinpointing the culprit by hook or by crook. Ever since the establishment of the Reid technique, the dominant method of interrogation used in Canada and the USA, this transition has become more evident. According to a study by the American Psychological Association performed on 631 police officers, 68% of suspects walked out of the interrogation room having made self-incriminating statements, 5% of whom were actually innocent. Coupled with this is the fact that 81% of police investigators believe that interrogations must be recorded to demonstrate to the court the tactics used in extracting confessions. The results of the survey are published at the end of this piece.
The results clearly indicate that many police officers believe that a variety of coercive tactics are being used in the interrogation room to elicit confessions. A primary example of detectives who rely on such tactics is Robert Glen Ford, a now-retired detective with an unnerving reputation for eliciting false confessions from innocent suspects. He also happened to be the officer responsible for the wrongful conviction of the Norfolk four.
In this case, Glen Ford did not “make a mistake” that led to the conviction of these men; he deliberately fed them information in order to concoct a fictional scenario in which they were murderers. How did they succumb to his poison? Through the age-old trick police officers use to obtain confessions: the looming threat of the death penalty. There have been many psychological studies assessing the human brain’s behaviour in situations of immense stress and vulnerability. Saul Kassin, professor of psychology at John Jay College of Criminal Justice, has invested time and effort into studying this phenomenon, and concluded that false confessions are of two types: compliant and internalised. The first type is the one we’re dealing with in the case of the Norfolk four – four men who still maintain their innocence undergo a harrowing police interrogation and ultimately confess to avoid the death penalty.
Investigators gave them false pretences, claiming there were witnesses and solid evidence that directly incriminated them, and, eventually, they confessed despite being aware of their actual innocence. Internalised false confessions are a whole other story. Picture this: an innocent suspect walks into the interrogation room. He undergoes a lengthy interrogation with plenty of tactics designed to extract a confession. He leaves the interrogation room with a new identity. He doesn’t sign the confession to comply with the police’s orders and evade the death penalty; he signs it fully convinced that he has committed this crime but, due to a number of reasons (fed to him by the police), he cannot remember actually committing it.
This is exactly what happened to Marty Tankleff, a 17-year-old who woke up one morning to a house with two unconscious parents lying in pools of blood. As he dealt with the trauma of this shocking discovery, police officers were taking advantage of his vulnerability and feeding him lie after lie. In less than a month, 17-year-old Marty transformed from a loving son into the callous murderer of his own parents. Perhaps he could have maintained his identity if the court had been able to clearly see the coercive techniques used by detectives in the interrogation room.
The solution to this issue lies in the interplay between psychology and criminal law. Criminal lawyers, both prosecutors and defence lawyers, must be aware of the psychological effects trauma can have on suspects, in order to ensure a fair trial where all circumstances are taken into account. The purpose of interrogations should be reconsidered, especially in countries like the United States where the tone of the interrogation is often accusatory. In the words of Saul Kassin, “the opening salvo of an American style interrogation is an accusation of guilt. And a refusal to accept denials.” Through his psychological research, Kassin also observed that England has witnessed “a sweeping reform...to move away from the confrontational style investigation to what they call investigative interviewing, where the goal is to gather information, to figure out what happened – to determine whether or not this person has information. Secondarily, to solve the crime by confession.”
In 2016, the Norfolk Four earned their freedom after persistent appeals and the support of DNA experts, forensic pathology experts, and law enforcement experts, as well as Judge John A. Gibney Jr.’s fair trial.
However, thousands of innocent convicts continue to bang their cell walls awaiting help. As more innocence clinics (perhaps ‘Innocence Projects’ sounds apt?) open, the world is in need of more people willing to invest their time and energy in those who have had their lives ironically snatched away from them by the law itself.
When Leofric, Earl of Mercia, told his wife Lady Godiva that, if she rode through their village naked, he would lift the oppressive tax regime imposed on the villagers, she sailed through the streets covered in nothing but her long hair. Humbled by her actions, the villagers decided to shut themselves in their houses so as not to see her naked body. That is, all except Tom, historically known as Peeping Tom, who peered out of his window to satiate his voyeuristic curiosity. Since Lady Godiva’s noble ride across the village in the 11th century, society has been dealing with difficult questions around the criminalisation of voyeurism. If she rode through the village naked, was it reasonable for her to have had an expectation of privacy? Was her act a ‘private act’ as required by the offence? Does criminalising voyeurism unfairly infringe individual liberty?
The Voyeurism (Offences) (No.2) Bill 2018
The legend of Lady Godiva and these perplexing socio-legal questions regain momentum as the House of Lords prepares for the second reading of the Voyeurism (Offences) (No.2) Bill on 24th October 2018. The Bill specifically deals with the issues of 'up-skirting' and 'down-blousing'. An ‘up-skirt’ picture is one taken by a man using a hidden camera directed up a woman's skirt; a 'down-blouse' picture is one taken with a hidden camera directed at a woman's breasts, bra or cleavage.
The current law on voyeurism is governed by s.67 of the Sexual Offences Act 2003, which does not specifically cover up-skirting or down-blousing. Instead, the common law offence of outraging public decency has been employed in previous cases dealing with similar conduct. That offence requires the conduct to take place in public, and for two or more persons to have been present and capable of witnessing the act. It has proven inadequate to deal with photographs of this nature, however, as its focus is on protecting the public from exposure to lewd, obscene or disgusting conduct rather than protecting the individual victim. Moreover, s.67 has also proved inadequate in dealing with up-skirting and down-blousing, owing to the limitations it imposes on making out the offence: (i) the victim had to be engaged in a private act at the time, and (ii) the defendant had to be observing the victim for sexual gratification.
The case against criminalisation of voyeurism
We live in an increasingly liberal western world, where social media and mass public surveillance pervade almost every sphere of life. When the young thrive on exhibiting their daily lives and tracking others on social media, and employers track employees’ progress and career graphs online, it is worth asking why we still object to voyeurism as a criminal offence. Indeed, with increasing liberalism, the belief that each individual should be entitled to his own wishes seems ever more viable. John Stuart Mill’s ‘Harm Principle’ prescribes that an individual’s liberty should be upheld at all times except when it curtails someone else’s liberty. Voyeurism only causes harm to those who know they have been victims of voyeuristic eyes, a surprisingly small percentage considering the amount of voyeuristic activity that goes on.
On the contrary, I argue that the issue is entrenched in human psychology and complex social dynamics. It concerns the subjective standard of acceptable norms and mores, and so must be viewed through a more nuanced lens. It may not be the case that every offence must be harmful in order to warrant criminalisation: incest and obscenity may not harm anyone per se, but they still offend public morality and sentiment. The fact that an act offends could be adequate to warrant criminalisation in the interest of maintaining social harmony and peace. Joel Feinberg recognised this and coined the ‘Offence Principle’, by which he asserts that an act’s offensiveness should be considered a sufficient reason to interfere with the liberty of a citizen in a democratic state. This principle sets a lower benchmark for the criminalisation of acts, one which appears more appropriate in a modern democratic society. Nonetheless, the Offence Principle can lead down a slippery slope that must be trodden carefully. Homosexuality, for example, ‘offended’ democratic, supposedly liberal societies until recently, yet society now acknowledges (for the most part) that homosexuality is not offensive. The question, then, is whether voyeurism is offensive merely to puritanical orthodoxy, or to society more broadly.
Should we take offence? The Gendered Nature of Voyeurism
If we accept Feinberg’s rather persuasive argument that the possibility of offence is a sufficient benchmark for the criminalisation of an act, the next question is whether up-skirting or down-blousing truly offends, or should offend, contemporary liberal society. It is a fair question to pose in the age of exhibitionist movements such as the ‘Free The Nipple’ campaign and an increasingly popular trend of celebrities posting nude photographs on social media. In light of these developments, would it be hypocritical to go as far as criminalising, say, the Daily Mail for publishing a photograph of the Duchess of Cambridge in an unflattering situation?
This argument would hold if up-skirting and down-blousing were non-discriminatory offences, involving pictures of men’s crotches as much as pictures of women’s cleavage. However, this is not the case. The very names of the offences, ‘up-skirting’ and ‘down-blousing’, indicate the highly gendered nature of this form of voyeurism. Cases almost always involve women being filmed while they are using lavatories, changing clothes, or seated at a compromising angle. Therefore, to members of a contemporary liberal society striving for gender equality, up-skirting and down-blousing are absolutely offensive, because they categorically target one gender group in society. Moreover, it is vital to remember that the aforementioned exhibitionist movements are consensual acts, whereas up-skirting and down-blousing are definitely not. It is especially significant that voyeurism need not be considered offensive in the traditional, puritanical way that prescribes that women should ‘protect their modesty’; rather, it offends the tenets of equality and is in fact a manifestation of the patriarchal norms that still underlie our society.
A good step forward
The Bill discussed above removes the legal limitations of sexual motive and privateness for up-skirting and down-blousing, finally acknowledging that these practices are often motivated by the intention to bully and blackmail, or by misogynistic desires to use the photographs as instruments of objectification and entertainment. The Bill is also attractive for the feminist cause, as it provides a specific offence of up-skirting to be brought to court instead of the more general ‘outraging public decency’ offence relied upon until now. That offence focused on the scandalous effect a picture would have on public norms of decency, rather than on the plight of the victim whose bodily autonomy has been violated. New Zealand was quick to recognise this and introduced specific legislation to tackle up-skirt images as early as 2006. Ironically, the New Zealand model was influenced by the UK provision on voyeurism; the former colony, however, was quick to recognise the difficulty that the ‘private act’ limitation of the provision would impose.
Perhaps a bigger step?
While the Bill is undoubtedly a good step in the right direction (albeit a relatively delayed one), the question that arises is whether it goes far enough. Whilst it removes the ‘sexual gratification’ limitation for up-skirting and down-blousing, it seems to retain the need to prove this motive for all other cases of voyeurism. There have been numerous cases involving women being filmed, photographed and observed in the nude, in their undergarments and while engaged in sexual activity. While the Bill finally addresses the misogynistic undertones that underlie the offence of voyeurism, restricting progress to the subcategory of up-skirts and down-blouses would be very disappointing. It is hoped that the House of Lords, in its second reading of the Bill on 24th October 2018, recognises this deficiency and rectifies it. Just as the villagers of Coventry owed Lady Godiva their gratitude, British law-makers owe the women of their country that much.
HOW TO BE A LAW STUDENT
Being a law student, it’s a big thing.
You’re told as much from the minute you hint to your sixth form college career counsellor that you might be interested in doing a law degree. You’re intimidated, sure. But you like the feeling of power that the concept of studying law imbues in you. It makes people see you as something else – they imagine you as a big-shot in a steel-and-glass tower somewhere in London, all suited and booted, with a five-figure starting salary waiting to be cashed in. You like that image of you too. You decide to stick with it.
You start telling your friends you want to study law. Most of them are impressed; they think that only ridiculously clever, academic, and verbose people study law. You don’t feel inclined to correct them, and who could blame you? It feels good, rising to the top. You read the books your career advisor suggests you put on your personal statement. Maybe you read a little more. You start popping little factoids about the law into casual conversation whenever you can; mostly as jokes, but sometimes as serious fact. People are even more impressed. You start owning the idea of being a law student. You pass your exams. And then you go to university, and you start being a law student.
And then it all starts to go wrong.
You find out quickly enough that law students are expected to graduate and become lawyers – and not just any lawyers, but Magic Circle solicitors. If your ambitions fall any shorter than that, your friends and your dearly beloved Law Society committee reps begin to persuade you to consider otherwise. And if your ambitions don’t even involve the law in any kind of way, you might as well stop wasting your parents’ money and go home, because you’re not a real law student. Law students make their degrees their mother, their father, and their God. Law students are lawyers from the start and lawyers to the end. Law students are in this for life.
As someone who maybe isn’t 100% sure that you want to be a solicitor, let alone any kind of lawyer at all, you swallow your discomfort and allow yourself to be funnelled through career nights every single week with all your other fellow law students. You listen to the moronic regurgitated platitudes that every single firm representative seems to spout about their employers, week in week out, and you wonder if law students ever get a break from this. It becomes clear soon enough that law students never get a break. At all.
If they want an internship in second year and a training contract before graduation, law students need to beg for it. Law students need to be able to charm, to captivate, to kiss ass. Law students need names to put on their training contract forms, names who would be “interested” in the progress of their application, names of hopefully a partner or a senior associate but anyone will have to do – names have power but some names have more power than others. Law students wine and dine with the firm representatives of their choice; law students lick boots and grease palms and put on their sweetest smiles for the face of the firm they want to sell the rest of their lives to, all in the hopes that they’ll be even considered for one of those murderously coveted vacation scheme places. “Law student” is synonymous with “suck-up”, with “sellout”, with “teacher’s pet”. Law students have no souls, because that is the price of admission into London’s best, richest, and most faceless LLPs – and that is all law students dream their lives will be.
“Pursuing your dreams is so important,” a law student tells you one day. “What’s your dream job?”
“I like to draw,” you say. “In a perfect world I’d be an artist.”
“That’s a stupid fucking dream,” replies the law student. “Have you thought about joining Norton Rose Fulbright instead?”
You think about it, and then you start to think instead that you might not be a real law student, because as much as you enjoy your degree on an academic level you don’t really seem to fit in like all the other law students in your lectures and tutorials. You don’t really have many law student friends either, and you should probably fix this. You think you might get along with some law students you talked to once or twice at freshers’ week, so you invite them to the plays you put on and the events you host with other university societies. None of the law students show up.
Law students do not have time to go to art galleries in the morning or to listen to radio shows in the evening. Law students follow the very simple monastic lifestyle of studying all morning and partying all evening, because excess begets excess - whether you’re reading case studies for six straight hours or turning your brains to mush with tequila and absinthe. Law students are not only disciples of the law and their chosen law firm; law students pledge their nights to drinking and clubbing and fucking each other so they keep it within the course, just like the royals of old liked to keep it in the family. Law students stick together, drink together, live together. Law students do not interbreed outside their degree. Law students do not deviate from the lifestyle their degree makes them live. Law students do not know a world outside their degree. Law students know themselves, and only themselves.
You know yourself too. You know you have a history of self-harm and you know you often entertain thoughts about how you might like to kill yourself one day. You know yourself, and it is a flawed self, because some days you can’t even bring yourself to get out of bed and other days you just want to step into traffic and hope that the car in front is travelling fast enough to break all the bones in your body. You try and talk to someone about this but the line to see the university counsellors is longer than the line waiting at the gates of hell, and you know you cannot talk about this to any law students either because law students never break down; never take a day off. You imply to a law student that you might be depressed one day and you get told that you should never have taken such a demanding degree if you didn’t think you could handle it. Inside, you wonder if it really is the degree, or if it is the law students who are the ones that you can’t handle.
You eventually come to the conclusion that you are not a law student, and you never will be. Being at university widens your worldview, but being a law student just narrows it down all over again. Being a law student means being corralled into a corporate career that you’ll have to beg and steal and borrow to get anyway. Being a law student means studying with the rest and partying with the rest and never getting any rest, or being ostracised from the rest. Being a law student means crawling over the dead bodies of your competition into the arms of a 70-hour work week and a lifetime of having enough money for everything except the soul you lost on the way to the top. Being a law student means having no other interests, and no other friends. Being a law student means keeping your anxiety and depression so heavily under wraps until someone finds you with a plastic bag taped round your head in your student accommodation bedroom and asks if you want to quit your course. Being a law student means staying on, and continuing with your degree – not because you want to be a lawyer, or because you love the culture that the course cultivates, but because law students never take breaks. No matter what the cost.
Studying for a law degree is one thing. Being a “law student” is another thing. In fact, being a law student, it’s a big thing. But it’s not for everyone.
It doesn’t have to be. And it shouldn’t be.
With the phenomenon of people identifying with the opposite gender becoming increasingly widespread in Western society, we must answer the question of what gender identity actually is, and why it has become such a contemporary talking point.
The official definition of gender identity is ‘a person's perception of having a particular gender, which may or may not correspond with their birth sex’, but it has recently become an important term, with people being taught that it is perfectly normal to identify with the opposite gender. As a result, many begin questioning their identities and lifestyles from a very young age. Are they right to do so, or does this way of thinking risk damaging their mental health in the future?
Many individuals seem to struggle with the notion of ‘identity’, as it is not often seen as something malleable, especially by older generations. Despite this, and the fact that people are born either male or female, children who show behavioural tendencies stereotypical of the opposite gender can feel trapped inside their own bodies, limited by the predetermined duality of sex and gender. The idea that the two are independent of one another is fairly new, with gender famously characterised as a ‘social construct’, an idea developed by Judith Butler in her 1990 book Gender Trouble. She describes gender as a part of culture: something a person can only determine for themselves after becoming accustomed to the world around them and working out their role within it.
The phenomenon of gender dysphoria, or people being physically uncomfortable with their anatomical sex, is relatively modern. Some sources, such as susans.org, claim that it has existed since ancient Egypt, dating back to 1503 BC, but such links to the past are often tenuous and unsubstantiated: they amount to accounts of men dressing in women’s clothes, and even these rest on little reliable evidence. The first instances of what we would now call gender dysphoria recorded in medical literature date to the early 1900s, when some women showed a keen interest in knowledge and writing instead of the strictly feminine roles of getting married and staying at home as mothers; this was considered abnormal, and doctors pushed for corrective therapy so that these women would be punished and pushed back into their ‘correct’ gender roles. It was in the late 1940s that Roberta Cowell, the first transsexual woman to come out in the UK, began her transition; she went on to undergo Britain’s first gender reassignment surgery in 1951.
The conceptual distinction between sex and gender, with sex the biological and gender the social category, is more modern still. The earliest example of the two being distinctly separated dates to Simone de Beauvoir’s 1949 existentialist treatise ‘The Second Sex’, in which she wrote: ‘It would appear, then, that every female human being is not necessarily a woman; to be so considered she must share in that mysterious and threatened reality known as femininity’. This, in fact, has formed part of the main argument of many modern feminist movements: that being female and being a woman are two separate things, independent of each other.
The reason why, in the 21st century, more and more people are identifying with the opposite gender, as argued by Dr. Tiemann of theravive.com, is that the distinction between gender roles is becoming increasingly blurred. Among animals, for instance, aggressive males tend to be more successful at winning mating privileges, so aggression has widely come to be seen as a male trait. Women, on the other hand, were forced to take care of their children because of the physical and mental investment needed to produce offspring, so caregiving came to be labelled a female attribute. However, with the development of medicine, technology, globalisation and society as a whole, the need for such strict gender roles has strongly diminished: the idea of reasserting one’s gender has thus become more of a social factor, according to Ruth Rubio-Marin of the New York University School of Law.
The mere possibility for people to undergo gender reassignment surgery, or simply to identify emotionally and mentally with another gender, has turned the entire idea into a social phenomenon, leading to what is called ‘rapid-onset gender dysphoria’. The term was intended to explain some parents’ observations of how their children came out as transgender seemingly suddenly, often during puberty, and how those children also had trans-identified peers and interacted with trans-themed social media. According to researcher Debjani Roy, if this is portrayed as common through the media, people will be ‘more inclined’ to treat becoming trans as a normal decision. Changing genders has become associated with popularity, recognition and dissatisfaction with one’s personal life.
The scientific definition of gender dysphoria is ‘the distress caused by feeling that one’s body does not match one’s gender’, but, interestingly, it acquired this name only after being renamed from Gender Identity Disorder (GID), so as to remove the stigma associated with the word ‘disorder’. However, this older and now obsolete term can still help us understand the way gender identity was perceived in the past. A disorder is defined by the Collins English Dictionary as ‘a state of confusion’, something hypothetically curable with the right treatment rather than a natural state of being. Gender dysphoria is still a disorder, a mental health issue; the insistence on viewing the condition as something other than a medical one is thus misguided, and leaves us unable to help these people.
Officially, according to the NHS, the symptoms of gender dysphoria can include depression, anxiety and isolation from peers, with the sufferer comfortable only when in the role of their preferred gender identity. However, the NHS also states that for a girl to be described as ‘tomboyish’, or for a boy to dress up in his mother’s clothes, is completely normal, being just a ‘phase’ of childhood. The problem is that there seems to be a very fine line between a child discovering the world around them and developing so-called ‘gender dysphoria’: if parents encourage such role-playing of the opposite gender rather than teaching their children ‘stereotypical’ gender norms, the child is more likely to develop gender dysphoria in the future, according to Julia Serano of Medium.
It is the job of parents to raise their children and, as a result, they are the people on whom children rely most to understand the roles of male and female. According to Tim Adams of ‘The Guardian’, if one parent is transgender, the child may identify as transgender as well. Children, despite being human beings entitled to their own opinions and views, are extremely easy to influence. They rely on those closest to them, i.e. their parents, to show them what to wear, how to socialise and how to behave. It is important for them to get to grips with the traditional roles of males and females before delving into the world of gender identity.
The norm in Western society nowadays is to provide psychotherapy and medical or surgical support for a person’s choice, but some individuals argue that it should be otherwise. Ben Shapiro, a well-known American commentator, makes the point that ‘gender and sex aren’t malleable’ and that the two ‘are not disconnected from each other’. The argument follows that assuming the identity of the opposite gender because of depression, suicidal thoughts or a desire to hide the signs of one’s sex is not a good solution. Many people who are not at ease with their gender claim that they are depressed as a result of not being able to come out. However, after coming out, many do not become less depressed.
Therefore, perhaps the problem is not that they want to come out, but that they need help in reasserting who they are, and the comfort of psychologists. The attempted suicide rate in the transgender community is 36%, as opposed to 3% across the UK as a whole: a worrying statistic. Of course, this is partly due to the distress and discrimination that people face after coming out as transgender, but it nonetheless shows that the depression people experience does not go away after coming out. If it is relief from depression that people seek, other methods should be considered. Mr Shapiro suggests that the best way to deal with this problem would be to provide psychological care to people who feel at odds with their sex, and to help them accept themselves for who they are.
At the end of the day, it’s very difficult to change people’s minds on this issue, as it has already become such a big part of today’s society. To say that you do not believe in the possibility of choosing your own gender, let alone are intolerant of it, would come off as uneducated and unwilling to accept a world where times are changing. With these sorts of issues, however, I believe that it is important to question the dogma of the majority, and to have your own opinion.
Perhaps the rise of gender dysphoria is a sign of cultural evolution - one that has the image of equality and a bright future for everyone in mind, but it may just be a step in the wrong direction. These are people who feel that their subjective idea of who they are is threatened by the confines of gender, but perhaps more effort should be put into promoting self-acceptance and the idea that gender doesn’t necessarily restrict your ability to express yourself.
After all that has been said about the topic of gender identity, it is important to realise just how important and life changing a decision such as identifying with a different gender can be. In order to come to terms with this phenomenon in a logical way, we must be willing to not just accept gender identity as it is, but question the reasons behind such decisions. It is simply undeniable that men and women are different from one another, but the world of gender seems to imply that perhaps this should no longer be taken for granted.
What is ‘identity’? It seems like a rather simple word to define when you come upon it. The first thing that pops up in my head is that identity is what makes us who we are.
How do we acquire our identity? Well, I would say I acquired it from my family, my heritage, my culture, and my name. I’ve always believed that identity is innate: you’re born with it and you keep the same one all your life. It seems to be quite straightforward and uncomplicated, but I have struggled with my identity most of my life.
Why? If I was born with my ‘identity’, then it should not be a problem for me because it would have been determined 20 years ago. However, throughout the years, I started feeling alienated from who I thought I was. I found myself stuck between the values and ideas imposed upon me since my younger days, and the new thoughts forming in my head, which sometimes heavily contradicted my previous traditional ones.
This identity struggle surfaced mainly because of my acquired multiculturalism, something that I definitely did not have when I was born.
Arriving in the world in the late 90s, in a Middle Eastern country you’ve probably never heard of, I was brought up in a family that had only ever lived and experienced one way of life, and so I received a specific set of values, a lifestyle, a terminology and a way of thinking. By the time I was five, I had no idea there was a whole different ‘world’ out there.
After five years of living with a set, stable identity, my parents found an opportunity to live abroad, in a Western European society. It proved to be an exciting, life-changing experience, but led me to question my identity over and over again.
From the beginning, radical changes interrupted my previous habits. I had to learn a new language from scratch. Unlike my parents, I became fluent in this complex European language in about a year. Even the teachers wouldn’t believe it, so they kept making me take language classes to make sure I had mastered it and lost any accent I had. Even today, some friends of mine are fascinated by the fact that I have no accent at all when I speak their language. One even told me that I spoke it better than some people who had lived there all their lives.
Enrolled in a private, supposedly bilingual school, I had to attend P.E. classes two to three times a week, and these were considered as important as Maths or Sciences. They had us practise different sports within the same week, from baseball to gymnastics. In my school back home, it had always been the same sport: swimming. To be fair, I welcomed the idea of trying out different activities, but what filled me with dread was what came with them. Everyone, boys AND girls, had to change into their sports clothes in class. I get it, you must be asking yourself: why was that a big deal for me? Because I had always been taught to dress in a certain way, so as not to show too much skin. My parents had taught me that my body was very precious, and that I should not expose it to the gaze of others, especially not males. There was a vulnerability, an innocence, associated with my naked body. Therefore, before each sports class, extremely uncomfortable, I would look for a corner to hide in or ask to go to the toilets. My classmates already found me strange, so this was just something else they could add to their list.
I also had to eat at the school canteen every day. For someone who ate her mum’s (delicious) homemade food most of the time, the dry taste of the ‘purée’ with cold carrots was not appetising at all. But the taste was the least of my problems. My daily struggle was trying to figure out what I was eating, i.e. making sure it was not pork. Once, I asked the woman serving the food what type of meat was on offer and she told me it was lamb. Delighted, as most of the time I could not eat half of the food on my plate, I sat down, ready to indulge in that lamb. After chewing it for a couple of seconds, I still did not recognise the taste of lamb. I asked the people sitting next to me if they knew what type of meat this was – a very frequently asked question – and one of them answered, ‘It’s pork’. Confused, I stood up and went back to where I had got my food. The first lady was no longer there, so I asked another, who confirmed that it was not lamb.
I had no choice but to welcome all these new changes into my life. However, what I hadn’t realised was that this would lead me to constantly ask myself whether I had to forget my past routines in order to adapt to this new lifestyle. As you can imagine, there was no one like me in my class, or even in the whole school. Even though it prided itself on its multiculturalism, there was only one Asian pupil and a couple of Indians in my grade. Most of the pupils were blonde, blue-eyed and really skinny, contrasting with my dark, curly hair, protruding brown eyes and thick eyebrows. I had some friends, but couldn’t truly relate to anyone.
Another struggle was my name, a vital part of my identity. Something that never failed to make my stomach turn was when the teachers read our names from the register in each class to check we were present. They all pronounced my name wrong for the first couple of weeks, and since each year we had different teachers, the same process started all over again at the beginning of every school year. By the time I was eleven, I was writing my name a certain way to stop teachers saying it wrong and pupils making silly puns about it. I hated my name to the point that I asked my parents if I could change it to a more common one, like Jessica or Clara. All I wanted was to have a ‘normal’ name like the other kids in my class, so that the teachers would say it right. They (obviously) refused.
As a minority, I kept comparing myself to the majority of the people around me. I felt like an alien, an error that had to be fixed because it wasn’t like the rest of the crowd. I was trying to find a way to reconcile two incompatible cultures, which gave rise to this identity struggle I’m still facing today. It felt like a choice had to be made between the two; the Western identity I acquired through time seemed to be taking over and was easier to adhere to since most of my surroundings complied with it.
Growing up, I slowly started to separate my identity at school, with friends, from the one at home that I shared with my family. I didn’t want to hurt them, because I knew there were certain things they would never approve of, but at the same time I wanted to do things my own way. For instance, I was not allowed to have any male friends until I was much older. I did speak to boys at school, but hid it from my parents and could never invite them to birthdays. My father still gets angry when I bring up boys, while my friends’ parents accepted boyfriends sleeping over and holidays with male friends without the kind of heated debate I would have faced. I didn’t enjoy having a sort of ‘double identity’, but with time I learned that I didn’t have much of a choice if I wanted to preserve both of my cultures harmoniously.
Sexuality was also quite confusing for me. In high school, they regularly brought in psychologists and experts to raise awareness about it. They all told us that it was perfectly normal, and welcome, to have sexual relationships, but informed us of the need to use protection. I barely listened at first, because the phrase ‘No sex before marriage’ was ingrained in my brain. But I could not understand why my family and my school told me completely different things about sexuality, and about many other issues. This is mainly where my identity problem comes from. Do I have two separate identities, or a single one that is torn between two polarised ways of life? I wish I had an answer to this extremely complex question, but I still do not.
What made things harder is that, on one hand, I felt like my home was rejecting me – telling me that I was not like the people there anymore, that I no longer mastered the language as well as before, that I no longer dressed like my people and even disagreed with some inherent values. But on the other hand, the Western society I was living in refused to accept me, as I wasn’t like them and never could be. I didn’t look like them, I had a different name, I ate different foods and just didn’t see things in the same light.
A lot of people started to take an interest in my background when a wave of terrorist attacks hit Europe, accentuating the negative perception the West has of my ethnic background. Some people I had never talked to asked for my views on terrorism and on wearing the hijab, as if that was all my culture could be distilled to. They loved voicing how hijabs and burqas deprive women of their freedom, but completely disregarded the beauty and uniqueness of my heritage.
Being faced with different cultures throughout my life has made me much more open-minded, in a way I never could have been had I stayed in my home country. I’m proud of being multicultural because it gives me the ability to see things from a broader perspective. But I still feel confused sometimes when people ask me where I’m from and where I belong. Because I move a lot, I don’t really have a ‘home’, a fixed place of comfort and ease that I can think about going back to when I’m feeling low. However, it did get better with time, mostly when I went to an extremely diverse university in London.
Being surrounded by such a diversified environment, I started realising that I might not be the only one going through this identity struggle. Especially in a big city like London, where different cultures coexist and are celebrated through student unions,
cuisines and cultural celebrations, it is more likely that people will be accepting of others. Although I did meet some individuals with radical, xenophobic views, the general atmosphere is one of cohesion and acceptance. This has helped me immensely.
The previously constant feelings of not belonging anywhere and not being understood are no longer a part of my everyday life. Of course, I am still building my identity now – as are most people my age with similar experiences, I’ve learned! – but I will no longer let peer pressure affect my relationship with my past, my heritage, or my identity. You should not settle for the image society paints of you; feel free to stand out from the crowd, whatever it takes. What matters is how you perceive and define yourself. Back in school, I always wanted to be like the people in my class, but I’ve come to realise how much more interesting my differences make me. Today, I feel immense pride when I am brought to explain my dual heritage and background, because it is the complexity of my path that built the identity I have now.
When they met publicly for the first time in 1868 to discuss a cause advocated by John Stuart Mill, the father of nineteenth-century liberalism, they were but an insignificant group whom nobody imagined bringing change to Great Britain. Soon, they became martyrs for their cause when their government responded with violence and harassment. Fortunately, before they could become dispirited, the Great War, however tragic otherwise, brought them victory through the Representation of the People Act 1918, with further extension of the franchise in 1928.
Little did they know that it was just the beginning of their fight to be acknowledged in a man’s world. For the past 150 years, women in the UK have been fighting for equality with men. Those who desired social progress have together managed to secure a legal framework for equality at work through employment legislation, further supplemented by the Human Rights Act 1998 and EU Directives. The questions now are: how much more campaigning does the cause need? How extensive a legal obligation is enough? And when does social advocacy turn into a social justice war?
Possibly the largest and most recent survey of the position of women in the solicitor profession was published in March 2018 by the Law Society of England and Wales to mark International Women’s Day. It included 7,781 respondents, predominantly women (5,758). One of the questions referred to the ‘unconscious bias’ within the legal profession that stops women from achieving higher positions, with 52% of replies confirming it existed. This raises several issues.
First of all, what exactly is ‘unconscious bias’ and do all of us understand it in the
same way? How do we distinguish between general feelings of antipathy, which are natural, and bias caused by a protected personal characteristic, which is discriminatory? In studies of diversity in the judiciary it has often been noted that ‘male, pale and stale’ judges can easily project their life experience onto potential judicial candidates, thereby creating unconscious bias by choosing their lookalikes. This was, however, a concern of the ‘tap-on-the-shoulder’ pre-application system, which is not what happens nowadays in law firms, where partners, including female partners, vote, or in the judiciary, now that the Judicial Appointments Commission has been set up.
Secondly, although the most curious discrepancy is that undergraduate law degree entry trends show a significant difference between the proportion of women entering the course (two-thirds) and the proportion of female partners (about one-third), that simple comparison ignores the complexities of life. Not all undergraduate law students will go on to work as solicitors in the UK; there simply aren’t enough training contracts for everyone. In 2016/17 there were almost 90,000 law students enrolled, as compared to the 175,000 qualified solicitors in England and Wales, a profession split roughly equally between men and women. Moreover, not every solicitor on the roll is in practice, and not every one of them will want to keep working their way up the ladder to partnership.
Fortunately, the latter was recognised by the survey, and thus more specific inquiries were made about poor work/life balance (confirmed by 49%) and the lack of flexible working practices in law firms (confirmed by 41%). Unfortunately, the fundamental – and gender-discriminatory – flaw in this thought process is the assumption that these factors influence only women. That is, the survey acted as if only women hoped for more in life than hard work, wanted to create a family, and spent their free time with loved ones. Christina Blacklaws, the then Vice-President of the Law Society, claimed that those features drove women out of the profession. Certainly, time and work commitments, when they are too strenuous, do force us to rethink our careers or reorganise our lives. Yet it seems that we are turning the ideology of equality on its head, determined to prove that female lawyers are experiencing gender discrimination.
The following is only one of many glaring examples of current feminist doublethink. On the one hand, there are complaints about non-accommodating working practices that promote male-oriented routes to the profession. On the other, a study on parental leave policies within law firms in Helsinki and Montreal, conducted by Marta Choroszewicz and Diane Gabrielle Tremblay, suggests that a more flexible approach towards mothers champions male supremacy because it sets back women’s careers by offering them longer maternity leave. To what extent are these policies influenced by the fact that women actually give birth to children and need to recover, emotionally and physically?
Finally, it is worth noting that, just as some men decide to renounce family or social life, it is only reasonable that the same be expected of women who want to lead an outstanding career. On that note, however, there is some evidence of continued compensation inequality that deserves a brief mention. A survey carried out by legal recruiter Major, Lindsey & Africa shows that female partners earn 24% less than men, although only 15% of the partners surveyed report the existence of gender bias in their firms. The reason for the pay gap, therefore, might be more prosaic: on average, female partners bring 10% less money into the firm. Moreover, female promotions to partner level have only recently picked up, hence their earnings might still sit at the lower end of the compensation spectrum.
A cursory look at the Bar reveals some appalling statistics. To share but a few: only 28% of court judges are female, including 21 out of 97 judges in the High Court, 9 out of 38 in the Court of Appeal and the three female Supreme
Court Justices, Lady Hale, Lady Arden and Lady Black. The latter figure is commonly compared with the Australian High Court (43% female judges) and the Canadian Supreme Court (44%). These numbers show that the plague of bias really had been troubling the profession before the JAC was set up by the Constitutional Reform Act 2005 to create a formal appointment system.
Current judges still speak fondly, although amazed, of receiving a pupillage in the pre-Pupillage Gateway times. It required no strenuous applications and formal interviews but rather was a networking, conversational process during which the candidate either presented himself as likeable or not. Of course, if he were to be likeable, he had to be similar to the panel and therefore white, male and Oxbridge-educated. Nowadays, championing equality, fairness and diversity, barristers commonly admit it was not the correct approach. The message has come through, and the vehicles of mobility for women in the barrister profession have been set in place.
Those spreading the idea of the Bar as an exclusive rich boys’ club seem to have forgotten that diversity cannot simply be switched on. Policies take years to implement and decades to take effect; a legacy is not erased overnight but must wither away and be replaced. The average age of a UK Supreme Court Justice right now is 68.4. Half a century ago, women were not encouraged to become barristers, and we can see the result. Luckily, the commitment to diversity has since only grown, and the statistics show that change has already taken effect at entry level, where it can reasonably be expected. As of 2016/17, there are more female pupils (255 to 216) and more women called to the Bar (625 to 559) than men.
Chambers and Inns regularly organise workshops and events directed specifically at women. Lady Hale, who has long been an advocate for women at the Bar, has been chosen as the new President of the Supreme Court. The Bar Council is working tirelessly to implement parental leave policies that would help barristers support themselves and rebuild their practices, even though barristers technically remain self-employed. Again, this discourse is discriminatory in being evoked mainly in the context of women, although clearly fathers are in no easier a financial situation if they decide not to work, or to work less, and spend time with their children instead.
Thus, the facts show that there are multiple schemes on both sides of the profession for combating such inequalities as might still exist. Reviews of these strategies give positive feedback on their implementation. In 1975, a legal obligation was placed on employers protecting, inter alia, against gender discrimination; it continues its existence today in the Equality Act 2010, and employment policies have been and are being changed. That being said, women nonetheless feel the need to advocate for their place in the legal world, and some even go as far as to claim that the legal profession still displays a gender bias.
Perhaps it is yet one more attitude adjustment that needs time to occur and we will see the movement wane as the problems are being solved. But perhaps we have entered the age of attention-seeking social activism at all costs, whether there is a real purpose to it or not. After all, as men see the world turn its back on them in order to appease the other gender, they start to raise their voices as well. No longer in order to help the fight, but rather to prove that the discourse of disadvantage is now on their side.
NO SONG LEFT FOR THE STAR SPANGLED BANNER
In 2018, popular campaigns arose to criticise fundamental aspects of American society. From questioning the embedded racism of constitutional structures to prevalent disillusionment with US military actions, the American public have used their voice to demonstrate their concern with the administration of government action. Have these actions provoked a crisis in the American identity?
Part 1 – The American History and its Identity
In 1787, the Constitutional Convention was convened after the failures of the Articles of Confederation in maintaining a functioning democracy. The formation of a new national government with three separate branches was popular among the people of America, and the Constitution was ratified in 1788.
The freedoms associated with a codified constitution and a system of checks and balances are prized by Americans. Having fundamental rights and freedoms carefully protected by a constitution has led the state to be dubbed the leader of the ‘free world’. Yet the country was built through the genocide of Native Americans and the African slave trade. The ‘free world’ was a world that was not entirely free at all.
The country has also enjoyed its status as an economic superpower, contributing to globalisation (which some critics deride as ‘western imperialism’) as the leading state by nominal GDP. States around the globe hold the US dollar in their currency reserves as the worldwide trading currency; around two-thirds of global currency reserves are held in dollars.
However, this dependence on the American dollar has allowed America to become politically influential on the world stage, often using its economic might as both carrot and stick. This is a theory devised by Dr. Amira Jadoon, who co-authored with Bryan Early an article in Foreign Policy Analysis entitled ‘Using the Carrot as the Stick: U.S. Foreign Aid and the Effectiveness of
Sanctions Threats’. Foreign aid is not only used in an altruistic fashion; as Jadoon and Early show, the threat to withdraw it has let the US wield economic might on a par with its military power.
No other country or culture has enjoyed such cinematic success as Hollywood, but it is a Caucasian-dominated enterprise that does not pay homage to a country shaped by many different ethnicities. A survey by Omnibus found that only 20% of people believe Asian and Hispanic groups are represented in film, and only 18% believe the LGBTQ community is equally represented.
Accelerated by the technological advances of media, the US experienced profound socio-political changes. For example, the American Civil Rights Movement has led to an American society largely self-conscious of social inequality and injustices suffered by minorities due to the colour of their skin.
Nevertheless, there are still inequalities in the American justice system. It is a country where a white 20-year-old male can walk free after a six-month sentence for raping an unconscious woman. But the same leniency for a promising sportsman was not extended to an aspiring black male who was falsely accused of rape.
Brian Banks’ prospects for the National Football League were shattered when his accuser fabricated an allegation of rape against him. He was only 16 years old when he was tried as an adult and given a five-year prison sentence. Corey Bate was 19 when, like Brock Turner, he was convicted of raping a classmate at Vanderbilt University. Unlike Turner, however, Bate received a mandatory minimum prison sentence of 15 to 25 years.
This article does not seek to challenge whether Bate or Turner deserve their prison sentences, but rather proposes that the lack of uniformity in the judicial system is another part of the American identity. It is an accepted thought that skin colour, an intrinsically unchangeable characteristic, can be used as currency. Some ‘currencies’ are preserved, protected even. But other ‘currencies’ are viewed as worthless compared to those with more purchasing power.
The difficulty of categorising the American identity is much like the difficulty of categorising a human being. In every person there are elements of good, characteristics of fault, and the opportunity for change. A country with over 250 years of history, even before its ‘discovery’ by colonisers, cannot be described in one article.
However, several points of distinction can be proposed. The US has enjoyed the status of a political and economic superpower but lacks the ability to address the injustices which stem from the systemic and judicial dismissal of minority groups. Why is the country, although incredibly advanced in scientific research and economic influence, so negligent when it comes to addressing social concerns?
America is arguably the leading western state in the world but the fact that it is a
country fraught with unequal treatment is a concern for its future in terms of its influence as a soft power. Why should other states choose to pay attention to the US when it has undermined itself by not bringing instrumental change to combat social injustice?
The American identity, as presented, is thus flawed and inconsistent with a country whose national anthem declares it ‘the land of the free and the home of the brave’.
Part 2 – A Changing American Identity
NFL Kneeling Protests
A mere inaction led to one of the greatest reactions in sporting history. This occurred during a pre-season game between the San Francisco 49ers and the Green Bay Packers. Colin Kaepernick, SF 49ers’ quarterback, did not stand for the national anthem in a demonstration ‘to stand up for people that are oppressed’.
Despite the previous failures of athletes (such as Mahmoud Abdul-Rauf, John Carlos and Tommie Smith) using their sport as a platform for activism, Kaepernick’s campaign has produced tremendous results for the sporting world. His demonstration of athlete activism exploded on social media, with fellow sports stars and individuals supporting his venture. Nike even made him the face of an advertising campaign marking the 30th anniversary of its ‘Just Do It’ slogan.
Despite the negativity surrounding the campaign, NFL players now routinely kneel during the national anthem in support of Kaepernick and his cause. This unified display of activism demonstrates the evolution from the censorship of athletes to support for individuals who use their influence to voice their political concerns.
This change also has subsequent consequences for the American identity. Perhaps Trump is correct in calling the NFL kneeling ‘unpatriotic’ due to the refusal to stand up for the Star-Spangled Banner. But the overwhelming support for Kaepernick shapes a society recognising the significance of individuals’ actions and their sacrifice to demonstrate for political injustice and racial inequality. Surely protesting for political change in the country is just another form of patriotism?
‘FUCK TRUMP’ – The Most Unpopular Administration in History?
No presidency has been hit with more controversies than the Trump Administration. Firstly, his famous campaign slogan to ‘build a wall’ along the border of Mexico and the United States was stopped due to Congress refusing to fund the $25bn construction of the wall.
In addition, the idea was halted by the rise of other problematic issues, such as the backlash against healthcare reform. In particular, women’s access to healthcare regarding contraception and abortion has now been limited by the American Health Care Act (‘Trumpcare’), provoking groups such as Planned Parenthood to call it ‘the worst bill for women’s health in a generation’.
Moreover, Trump’s immigration policies have drawn intense scrutiny. Take, for example, the immigration ban on seven countries with Muslim-majority populations. This discrimination also extended to the deportation of undocumented immigrants: around 11,000 military families face the threat of a family member being deported, despite their service to the country.
Indeed, American Families United, a non-profit advocacy group focused on immigration, has found that 6.3% of 129 million married Americans have foreign-born spouses. In addition, it has been estimated that around 25% of those spouses entered the country as undocumented immigrants, meaning that 2 million spouses face the risk of deportation. Children have also been harmed as a result of these deportations, with almost 2,000 children separated from their families.
Lastly, Donald Trump has also faced allegations of election rigging involving the Russian government and accusations of sexual assault against him. The Silence Breakers and the #MeToo movement, for example, have focused on naming celebrities and prominent individuals who have committed acts of sexual assault; around 20 women have accused the President of sexual harassment.
The troubles facing the Trump administration have undoubtedly reduced the American public’s confidence in the American identity. Although the President of the United States holds one of the most powerful positions in the world, the negative press surrounding Trump has led to a campaign of protests against his standing as president and against his administration’s policies. It is a time when lobbying groups have started to target the specific failures of the administration, criticising its controversial policies on women’s healthcare and immigration.
America’s Global Position – A Diminished American Identity
The United States has enjoyed the success of being one of the leading states on the global stage. Accordingly, America has been described as a unipolar power, the most dominant state in the world.
Indeed, the US benefits from influential structural power: as a permanent member of the United Nations Security Council, it holds a veto over the affairs of the United Nations. In addition, the military and nuclear might of the United States far outweighs that of other states; it was the first country to develop nuclear weapons and has a total army strength of over 1 million soldiers.
However, the United States has been criticised for unnecessary military intervention. For example, the US and the United Kingdom, against the wishes of the United Nations, invaded Iraq in 2003 on the alleged belief that the leader of Iraq, Saddam Hussein, was developing weapons of mass destruction.
Although America has the largest economy in the world, at $16 trillion, it is suffering economic stagnation with a GDP growth rate of only 1.6%, far lower than that of the BRIC countries (Brazil, Russia, India and China). Not only is the US economy stagnating, but its accumulated debt of $17 trillion is the highest in the world.
Although the United States has enjoyed the prestige of being the leading Western superpower in the world, its economic power is stagnating, and its military interventions are increasingly questioned for ulterior motives. The two most prominent features of the superpower have diminished and come under criticism. Is it right to still call the United States a superpower?
The American identity is under threat from an unpopular administration and economic factors. However, in a more positive light, it has evolved to speak out against the Executive through the increase in more politically significant demonstrations against policies and systemic injustices.
To combat an increasingly unpopular American administration, the American public have led different movements fighting different causes.
Most of us like to think that we put the best version of ourselves online, but we’ve all seen questionable tweets, pictures and videos that make us grimace – ‘what will this individual’s future employers think?’. But why should a future employer feel entitled to stalk an individual’s social media? And what are the effects of cancelling a celebrity for something they posted four years ago?
There is a wealth of research delving into the effects of computer-mediated communication on social standards. Communication on social media tends to be much freer and more impersonal than in real life, due to the absence of the social norms and cues which regulate our face-to-face interactions. It creates a sense of deindividuation: although you may not be anonymous while posting from your Twitter account, you are likely to experience reduced self-regulation and self-awareness, which encourages people to think that they will not be held accountable for their opinions.
The past can and has come back to haunt people, including YouTube celebrities, thanks to the now-common culture of ‘cancelling’ that has arisen in recent years; if someone is ‘cancelled’, it means they have been (rather dramatically) virtually annihilated and no longer matter in social media culture. As fans and followers are vital to a celebrity’s relevance, withdrawing their support can be devastating for that celebrity’s career. The example of Laura Lee encapsulates the influence fans can have on a celebrity’s success. Earlier this year her career was flourishing, with 5 million YouTube subscribers and a successful makeup line, but it all came crashing down when racist tweets she had published in 2012 resurfaced.
Something about actively searching for tweets and social media activity from a famous person’s past in order to humiliate them strikes me as strange. No doubt we have all said - or perhaps even posted - something in the past that would not go down well in today’s society, in which people are more aware of the repercussions of their actions on social media.
According to a 2016 ComRes survey for the BBC, a fifth of British children aged 10 to 18 admit they have said something unkind or rude to someone on social media. It would be absurd, and quite abysmal, to contend that because of some untoward comments someone may have made in 2012 they now deserve the character assassination and defamation that some unlucky YouTubers and influencers have received. People change a great deal over the course of six years. Society also changes massively, and what is deemed acceptable and what is censored develops with the passage of time.
It is far more important that the problem is tackled at its roots; we should instead look at the real-life influences on people’s behaviour. When Stormzy used the word ‘gay’ as an insult in a tweet, it was also being used extensively in real life to insult others. When various celebrities were making racist jokes and tweeting the n-word, the popular TV show Little Britain featured Matt Lucas dressed in blackface. Considering the real-life influences on people’s social media activity makes clear that this is a much bigger problem than bad people saying bad things on social media.
Ordinary people have also been unlucky enough to find that their opinions, complaints or even old photos have spiralled out of the private social media realm and into the view of their bosses. That someone’s social media activity could cost them their job should worry us all. Let us not forget that we have a right to privacy under the Human Rights Act 1998, which provides for ‘the right to have one’s private life respected’. Just as an employer wouldn’t follow a potential employee around in real life and eavesdrop on their private conversations, the same shouldn’t be done on social networking sites in the name of background checks.
The idea that ‘whatever is posted online is public’ should never be an excuse to create an environment of censorship and to discourage people from giving their honest opinions in the fear that it may cost them the economic necessity of earning a means of living. Not only does it suppress freedom of thought, but it also has the effect of stifling creativity in the workplace. Research by McKinsey has shown that teams with greater diversity are 35% more likely to see financial returns which are above the median for their national industry. If ideas and perspectives are not diversified, this decreases the sources from which ideas
can be drawn, meaning that boring and repetitive concepts are more likely to be generated in the workplace.
Research conducted by Ruut Veenhoven across 126 nations found a strong positive correlation between happiness and political freedom. The most socially progressive society is one in which there is scope for debate. The most socially progressive society is one where people with unconventional opinions are not driven into the ground, because being shut down does not mean they will stop having opinions. It simply means they will take those opinions somewhere less visible. And that is more dangerous, because it eradicates the chance to engage in discussion and critique questionable opinions.
In conclusion, it is almost never justified to judge people based on their social media activity. The effects of employers carrying out social media checks can be dangerous: they could degrade a promising mode of communication, making it perilous for people to use. Rather, it is healthy and mutually beneficial for both employer and employee that a barrier remains between an individual’s private life and their corporate life. We should judge people based on what we know them to be like in real life, and if we don’t know them in real life, it makes more sense to refrain altogether from making assumptions about strangers online, because social media is only a tiny fraction of a person’s identity.