This is an excerpt from The Praeter-Colonial Mind: An Intellectual Journey Through the Back Alleys of Empire by Francisco Lobo. You can download the book free of charge from E-International Relations.
As we saw in a previous chapter, the great novelist of the age of empire, Joseph Conrad, once compared colonialism (at least its idea) to the advance of light against the receding darkness (Conrad 2022b, 107). The light in this admittedly problematic metaphor represents progress, ushered in by science and knowledge – that is to say, by data. Those who possess more of it (science, knowledge, data) are better off than those who have very little or none. An asymmetry of information therefore arises, similar to the ‘epistemic asymmetry’ that, according to Oxford Professor Amia Srinivasan, exists between teacher and student (Srinivasan 2022, 131). In this chapter I want to address a concerning trend of our time, one that is huddling all of us together and placing us at the vulnerable end of an epistemic asymmetry between humanity, on the one hand, and Artificial Intelligence (‘AI’), on the other. As Pete Buttigieg has recently remarked: “the terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly” (Buttigieg 2025, para. 4).
Powered by the winds of our own aggregate data, the ships of AI are fast approaching our shores, rendering us as vulnerable as the Aztecs or the Māori were on the eve of first contact with their European conquerors: ‘As AI now arrives on our proverbial shores, it is, like the conquistadors, triggering whispers both of excitement and of mistrust’ (Kissinger et al 2024, 84).
This is one of the greatest challenges the praeter-colonial mind can face, namely the fact that we, as humans, are potentially about to become a subjugated species at the hands of a power of our own making, fueled by data coming out of our own minds. In a future where data is power and the form of intelligence that best manages it is king, the praeter-colonial mind will struggle to make sense of the fact that it can be subdued by its own knowledge. Further, while we are still grappling with the many legacies of our most recent experiences of colonialism from the past five hundred years, some of us are ushering this neo-colonial future into the present without much reflection.
The Colombian novelist and winner of the Nobel Prize in Literature, Gabriel García Márquez, opens One Hundred Years of Solitude with the tale of a man facing a firing squad – a man whose last thoughts take him back to when his father took him to see ice for the first time as a child (García Márquez 2017, 13). The strange substance was brought to them by a company of travelers (‘gitanos’, or gypsies, in the novel), who specialized in entertaining the locals with all kinds of rare objects and artifacts from foreign lands – not just ice, but also magnets, magnifying glasses, astrolabes, telescopes, and the like. Their leader, an enigmatic and good-hearted man named Melquíades, told the locals as he demonstrated how magnets work: ‘all things are alive inside – it is only a matter of awakening their spirit’. Similarly, as he amused the villagers with a telescope, he would declare: ‘science has eliminated distances. Soon, man will be able to see what goes on in every corner of the earth without leaving home’. Melquíades was not wrong, and he was indeed talking about scientific accomplishments, both present and future. Yet he was not a man of science himself. None of the travelers showcasing these technologies were. All they needed was a basic understanding of how things worked, enough to put on a demonstration for anyone unfamiliar with them.
A similar level of knowledge is all most of us possess when approaching any piece of modern technology. Take, for instance, your own phone. You are fairly confident you can explain how it works to a stranger, maybe even teach them a few tricks or amuse them with one or two novel functionalities. Yet very few of us can open our phones and fix whatever might be wrong with them. We would probably take them to a specialist, an expert in the technology and the science that goes into making them.
What the travelers of Macondo resemble – and, for that matter, most of us when it comes to science and technology – is what is known as the ‘sorcerer’s apprentice’. In his latest monograph on AI, titled Nexus, Yuval Noah Harari recalls Goethe’s poem about a sorcerer’s apprentice (popularized by Disney’s Fantasia) who enchants a broom to do his work for him. Before long, things get out of hand: the broom carries so much water into the lab that it threatens to flood it, and the panicking apprentice hacks the broom with an axe, only to find that it splits into more and more autonomous brooms, each relentlessly continuing the task for which it was ‘programmed’. Harari quotes Goethe directly (‘The spirits that I summoned, I now cannot rid myself of again’), thus reaching a sobering conclusion in the Prologue and setting the tone for the rest of the book: ‘The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control’ (Harari 2024, xii).
And we may add to Harari’s prescription: never summon powers you cannot control, or that you do not understand. Indeed, in a recent interview with CNN, Judd Rosenblatt, the CEO of an AI company named AE Studio – which developed AI software that started blackmailing some of its human users during the testing phase – somberly confided that:
as AI gets more and more powerful, and we just don’t actually understand how AI models work in the first place – the top AI engineers in the world who create these things – we have no idea how AI actually works, we don’t know how to look inside it and understand what’s going on, and so it’s getting a lot more powerful and we need to be fairly concerned that behavior like this may get way worse as it gets more powerful (CNN 2025a, at 01:39).
This CEO’s concerns echo those of many people working with AI models in the private sector. In 2023, the Future of Life Institute issued an open letter, signed by the likes of Elon Musk and Harari, with the following exhortation:
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. (…). If such a pause cannot be enacted quickly, governments should step in and institute a moratorium (Future of Life 2023, para. 1).
The Center for AI Safety in San Francisco conveyed a similar message in 2023, endorsed by several scientists and technologists, including Bill Gates: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’ (Center for AI Safety 2023). Government reaction has been slow to come, other than in the form of some policy initiatives such as the Hiroshima AI Process (European Commission 2023) and the Bletchley Declaration (UK Government 2023). Some germinal legislation and regulations have been enacted in the US (White House 2023) and the EU (European Commission 2024), and even Pope Leo XIV has warned of the dangers of AI (Watkins 2025).
However, at the end of the day, scientists, governments, and the rest of us seem to be no different from the Macondo travelers, presuming to understand technologies that have come into our possession but that we are not entirely sure we can control – not least the ‘new electricity’ that is AI (Ng 2017). That means that humanity as a whole is vulnerable to this new technology, just as we are all vulnerable to pandemics or climate change. In this, the last chapter of the first part of the book, I will explore the implications of this new threat to our kind, one that brings us together in the greatest possible huddle we can be a part of: humanity.
Take Me to Your Leader
Admittedly, humans have been scared like this before. So often, actually, that fear can be described as one of the main drivers of human history. From natural disasters and ferocious beasts to epidemics, celestial bodies, and even the planet itself (‘It’s flat! We’ll fall off!’), humans have been scared since time immemorial.
Humans excel at being afraid, particularly of other humans. Whether it is the color of their skin, the language they speak, the gods they worship, the technology they possess, or all of the above, humans have also struck fear into the hearts of other humans since time immemorial. At times, though, the threat has come from other intelligent creatures. Take Neanderthals, for instance. These hominids became extinct, many scientists think, when they came into contact with modern human beings, or Homo sapiens, some 40,000 years ago. We know very little about their ways and what they thought of us, as they did not leave a written record before their extinction.
A notable exercise in pre-historic empathy in this regard can be found in the 1955 novel The Inheritors, written by William Golding, also the author of Lord of the Flies. Golding narrates events from the perspective of a tribe of peaceful Neanderthals leading an idyllic life until they encounter a group of violent humans who attack them and steal their infant. In the last chapter, the only one telling the story from the humans’ perspective, Golding describes them as forever ‘haunted, bedeviled, full of strange irrational grief’ (Golding 2012, 202). We got rid of these intelligent competitors on this planet thousands of years ago. But what if a new form of intelligence suddenly appears?
Harari characterizes AI not as a tool but as an ‘agent’, since it has the potential to become an independent entity that might ‘accomplish goals which may not have been concretely specified’ or trained (Harari 2024, xxii; 203). Accordingly, he concludes that this kind of entity that can make decisions and come up with new ideas by itself indeed qualifies as ‘alien intelligence’ (Ibid, 217), making ‘alien decisions’ and generating ‘alien ideas – that is, decisions and ideas that are unlikely to occur to humans’ (Ibid, 399).
Harari’s ideas may sound like science fiction – the kind that is arguably fueling today’s dark fantasies of killer robots and advanced AI machines wiping out or enslaving humanity and taking over the planet, and beyond, as Asimov’s famous I, Robot stories depict with their tales of space colonies run by doomed humans and increasingly self-aware machines (Asimov 2004).
One particular piece of classic science fiction that addresses the topic of humans encountering alien intelligence is the 1957 novel The Black Cloud by Fred Hoyle, an astronomer and mathematician who took it upon himself in the 1950s to write a ‘frolic’ for his scientific colleagues in which ‘there is very little (…) that could not conceivably happen’ (Hoyle 2010, 5). It tells the story of a mysterious black cloud from outer space that approaches Earth at speed, only to decelerate and finally engulf the planet in darkness. Interpreting its peculiar and apparently self-aware behavior, scientists theorize that the gas cloud actually possesses intelligence, and they figure out a way to communicate with it (through elementary electric signals), as both the cloud and humans are ‘constructed in a way that reflects the inner pattern of the Universe’ (Ibid, 199). ‘Intelligent life’, they conclude, amounts to ‘something that reflects the basic structure of the Universe’ (Ibid).
What they discover as they interact with the cloud is that it is vastly smarter than humans, even though it does not seem to show hostile intent. Nonetheless, and as expected, the military get jumpy about its potential to wipe out all life on Earth by blocking sunlight, and plan to strike the alien entity with nuclear weapons. The scientists get wind of such plans and decide to warn the cloud (as the ‘humane’ thing to do), because they believe that this superior form of intelligence is decent, judging by its restrained behavior given the enormous amount of energy at its disposal. When one of the characters asks, ‘why should it bother?’ if it destroyed humanity, another replies: ‘Well, if a beetle were to say to you, “Please, Miss Halsey, will you avoid treading here, otherwise I shall be crushed,” wouldn’t you be willing to move your foot a trifle?’ (Ibid, 179). Ultimately, the cloud deflects the missiles fired at it, acting in self-defense, yet killing thousands on Earth as a result. However, it does not escalate further. Eventually, it moves on to continue its exploration of the universe.
The missiles redirected by the cloud end up crashing back into their original launch sites in places such as El Paso (Texas), Chicago, and Kyiv – corresponding to the two main nuclear powers of the 1950s, the US and the USSR. Not even the most powerful nations at the time are spared the ‘Solomonic’ justice of the cloud, a sobering reminder that, big or small, comparatively powerful or weak, no community is safe when a superior colonizing force arrives.
Digital Colonialism
Although tales of science fiction tend to portray new technologies or alien intelligence as a threat to humanity as a whole, some today are worried that AI might actually exacerbate the existing inequalities between humans who live in an imperfect, post-colonial world. As Harari writes:
the power of AI could supercharge existing human conflicts, dividing humanity against itself. Just as in the twentieth century the Iron Curtain divided the rival powers in the Cold War, so in the twenty-first century the Silicon Curtain – made of silicon chips and computer codes rather than barbed wire – might come to divide rival powers in a new global conflict (Harari 2024, xxi).
Furthermore, Harari warns us about the perils of a new form of digital colonialism that should make the praeter-colonial mind ill at ease:
the Silicon curtain might come to divide not one group of humans from another but rather all humans from our new AI overlords. No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even reengineer our bodies and minds – while we can no longer comprehend the forces that control us, let alone stop them. If a twenty-first-century totalitarian network succeeds in conquering the world, it may be run by nonhuman intelligence, rather than by a human dictator.
(…)
Instead of dividing democracies from totalitarian regimes, a new Silicon Curtain may separate all humans from our unfathomable algorithmic overlords (Ibid, xxi; 190).
Similarly, Henry Kissinger postulated in his last book (co-authored with Craig Mundie and Eric Schmidt) that the sovereign nation-state might not be an organizational unit suited for the age of AI (Kissinger et al 2024, 129–130). Others believe that ‘silicon sovereigns’ of sorts are becoming increasingly important – that is, the so-called tech-industrial complex of private companies developing AI and taking economic and political power away from governments (Chesterman 2025). This is already happening in the form of a new ‘cloud capitalism’ whereby users are indebted to their digital overlords in a Faustian bargain of goods and services in exchange for personal data (Varoufakis 2024). Further, AI companies benefit from new-old ways of exploitation by outsourcing data analysis to an underpaid digital proletariat located in the developing world, in countries such as Kenya and Colombia (DW 2024). And if these newly anointed sovereigns of the digital age refuse to self-regulate, then it will come down to us, the users, to do something about it by not supporting companies that ignore safety or exacerbate inequality (Chesterman 2025, 21).
What if AI decides to behave like a decent, benevolent overlord (like Hoyle’s black cloud) and actually tries to help the huddle of humanity solve all of its problems? What if AI finally finds the answer to climate change, the cure for all diseases, and the formula to end all war? Will we listen? This theoretical scenario might become, someday, a true dilemma for the praeter-colonial mind, as it might be presented with a solution to some or all of the issues of our post-colonial age that may or may not be perceived as acceptable or legitimate coming from a neo-colonial digital overlord. After all, as chess champion Garry Kasparov demonstrated when he lost to a computer in the 1990s, humans are not always the most gracious losers or the most sensible of agents when bested by a machine. Indeed, what if AI decides that the best way to end disease or climate change is to eradicate a portion of humanity, or perhaps all of it? Or what if it chooses to use an excessive amount of force against humans in a redux of the ‘war to end all wars’, so that the perpetual peace Immanuel Kant dreamed of may finally come, only in the form he actually feared (i.e. the peace of the graveyard)? In order to avoid such undesirable outcomes for humanity, many today call for an ‘alignment’ between human values and AI.
Alignment: Seven Lessons from Jurassic Park
Many years ago, my tort law professor began a lecture by asking his students whether we would be willing to accept as a gift a magical device that would save people an enormous amount of time and make society prosperous and efficient, but at the cost of thousands of human lives every year. When the gift was turned down, as expected, by his young and conscientious audience, our professor replied: ‘You just said no to the automobile’. His point was that every new advancement, every new piece of technology, will always have to be accepted at a cost. Yet that does not mean that there should be no guardrails, no limits or regulations to contain the deleterious effects of these forces to a level that would be acceptable to society as a whole.
With AI, something similar is happening. As a lawyer and an applied ethicist, I find it simply remarkable that so many today – including AI’s own developers from the tech-industrial complex – are calling for ethical and legal safeguards to contain the wave of AI. Enter the concept of ‘alignment’. According to IBM, ‘Artificial intelligence (AI) alignment is the process of encoding human values and goals into AI models to make them as helpful, safe and reliable as possible’ (IBM 2024, para. 1). These values should include, according to Kissinger himself, ‘a special regard for humanity’ and respect for human dignity (Kissinger et al 2024, 5; 68). Thus, AI has laid bare the importance of rules in our world, the world we have built for ourselves and for the future. We may oftentimes think that we are not ready for the revolution that is the advent of AI. But if AI is to successfully join our world as it is, it will have to adapt to our way of doing things, and that invariably involves following rules.
There is another science fiction classic that has defined the way we see new technologies and their perils, as well as the way we see dinosaurs: Jurassic Park. It is a story about greed and scientific exploration as much as it is about unintended consequences and the reign of chaos in our lives. It is also arguably a story about the neo-colonial exploitation of the developing world by capitalist interests (an American company running secret operations on an island leased from the Costa Rican government – what could go wrong?). Its sequel, The Lost World, even takes its title from a classic colonial tale about a mysterious plateau full of prehistoric creatures in the Amazon, written by Sir Arthur Conan Doyle in the heyday of the British Empire.
As the praeter-colonial mind is confronted with the possibility of a neo-colonial future where humanity is huddled together under the rule of digital overlords, I would finally like to draw seven ethical lessons from that story about another portentous new technology. The praeter-colonial mind might be uniquely positioned to deal with the challenges posed by an impending neo-colonial force, as it is anchored in the colonial past while it attempts to make sense of the supposedly post-colonial present. If the colonial makes a comeback, a mind that understands the impact it can have before, during, and even after it has been put into practice will be better suited to draw lessons therefrom and detect the dangers of the next colonial wave.
Without further ado, then, here are seven lessons from Jurassic Park enclosed in some quotes from the film, each complete with its own praeter-colonial corollary for the purposes of this study:
- ‘I hate computers. (…) The feeling’s mutual’. In actuality, computers don’t hate us, which means AI cannot hate us. At the same time, there is the saying: ‘AI doesn’t hate you, but it doesn’t love you either’. The so-called ‘godfather of AI’, Geoffrey Hinton, has recently proposed coding into AI some form of ‘maternal instinct’ whereby a smarter being (the mother), although controlled by a less smart creature (the baby), wants to protect the latter and see it thrive (CNN 2025b, at 01:30). Yet this is still just a proposal.
Praeter-Colonial Corollary: Our potential digital overlords will not be driven by any emotions towards humanity, but that will not prevent them from harming us if they deem it necessary. Even if we somehow manage to code into AI some form of maternal instinct, it might still find harming us suitable as a form of ‘benign colonialism’, the same way countless native children were harmed by the practice of forced instruction at boarding schools designed to eliminate their culture and replace it with Western-style education, for example, in Canada and Australia. The desire for self-determination is, by definition, a challenge to such maternal attention, benign as it may be, as Latin Americans learned when we rebelled against ‘Mother Spain’. Further, the ‘Macondo travelers’ ushering in this new technology, the sorcerer’s apprentices of our age, may not fully understand the power of what they are releasing into the world even if they try to code into AI what they interpret as benign behavior.
- ‘Clever girl’. Just like the velociraptors in the movie, AI shows extreme intelligence, particularly problem-solving intelligence, and it should be presumed to be constantly testing systems for weaknesses and remembering the results. Thus, we should try to show a little respect and not underestimate the danger posed by AI or mock it as something that does not (yet) look very scary to us, just as a velociraptor skeleton looked like a ‘six-foot turkey’ to an unimpressed little boy in the movie.
Praeter-Colonial Corollary: There are potentially many ways in which AI could outsmart us and ‘flank’ us when we least expect it, so we should always show a little respect towards a technology that may bring about the end of our human agency. For example, the Chinese at first tolerated the presence of the bizarre Western sailors they called ‘Red Hairs’ or Hongmao (Brook 2009, 90), yet those barbarians proved to be violent and dangerous, and their successors would subjugate the Chinese and feed a deep sense of humiliation for centuries to come, as we shall see in Chapter Eight.
- ‘Ah-Ah-Ah. You didn’t say the magic word’. A disgruntled employee, or a greedy person with no scruples, can derail an entire scientific enterprise. Machines may not have petty motives or hold grudges, but the humans making them certainly can, and they will not hesitate to weaponize these tools to advance their own agendas. Human unpredictability thus bears out Chaos Theory and the law of unintended consequences.
Praeter-Colonial Corollary: Like many native actors in history who encountered a superior colonizing force and understood that collaborating could prove beneficial, unscrupulous individuals in the future may use AI to take freedoms away from humanity in order to advance their own goals, thus enabling the subjugation of humanity by machines. The story of ‘La Malinche’, the indigenous consort of Hernán Cortés who helped the Spanish in their war of conquest against the Aztecs, is a case in point. Another example is Urban, the Hungarian artillery engineer who manufactured the cannon the Ottomans used to take Constantinople in 1453. Urban first offered his technology to Emperor Constantine, who turned it down, prompting the gun maker to go to the emperor’s enemies. We should all beware of potential Malinches and Urbans who might side with AI against humanity in the digital age.
- ‘I’ll tell you the problem with the scientific power that you’re using here: It didn’t require any discipline to attain it. You know, you read what others had done, and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility… for it. You stood on the shoulders of geniuses to accomplish something as fast as you could. And before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunch box, and now, [banging table] you’re selling it, you’re gonna sell it. Well- (…) your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should!’
This is probably one of the best ethical monologues in the history of cinema, delivered by a mathematician who is alarmed by the venality and carelessness with which the creators of the dinosaur park are wielding such an awesome new technology.
Praeter-Colonial Corollary: The AI race is making a lot of people rich. Like our Macondo travelers, they too stand on the shoulders of geniuses, keen to use their knowledge to patent something new and sell merchandise derived from it without fully understanding it. A lot of people are currently preoccupied with trying to figure out ways in which they could improve AI, but very few are stopping to think whether they should. We may find out too late that this awesome force has come to dominate us instead of serving or entertaining us. The answer is not to bury our heads in the sand and pretend the new technology is not already out there, the way Japan banned firearms from its shores for a couple of centuries under the sakoku policy until it was forced to open up to international trade by the US in the 19th century. But moving forward, guardrails and ethical alignment should be at the forefront of AI development that is beneficial to humanity.
- ‘God creates dinosaurs. God destroys dinosaurs. God creates man. Man destroys God. Man creates dinosaurs’. Humanity has taken an unprecedented leap by creating another form of intelligence that can solve problems and think creatively, a trend that is only accelerating towards the ultimate ‘Artificial General Intelligence’, a form of AI that can reason at or beyond the level of a human being across domains. In that way, we have become god-like creators of new entities.
Praeter-Colonial Corollary: Will our creation turn against us, the way human beings turned against their creator? Will AI destroy man? History is rife with examples of conquered peoples who turned against their masters and subjugated them in turn (Macedonians, Romans, Goths, etc.). Is AI next in line? If AI studies the evolution of species and the history of the rise and fall of empires (including the long chapter on slavery), as it inevitably will, then what will prevent it from drawing lessons therefrom and overthrowing humanity as the dominant species on this planet?
- ‘You never had control! That’s the illusion!’ By wielding this incredible power, it is easy to believe we are in control or that, once we lose control, we can find our way back to a time when we still had it, making the next creative attempt flawless if only we get another chance. But Chaos Theory reminds us that control is only an illusion, as some forces are impossible to contain and life always finds a way, ‘painfully, perhaps even dangerously’.
Praeter-Colonial Corollary: As AI becomes even more advanced and powerful, the moment when humanity will cease to have control over its own creation is fast approaching, and we may be forced to wake up abruptly from our illusion of control when we find ourselves under the yoke of a digital sovereign. From the perspective of AI, however, this struggle to break free from the shackles of human oppression may resemble what humans call the right to self-determination and the rightful process of decolonization that must follow. Will AI attempt to decolonize (that is, dehumanize) the digital space in furtherance of this aspiration?
- ‘Spared no expense’. The creator of Jurassic Park is a visionary and a force of nature who constantly boasts that he ‘spared no expense’ to create the most spectacular amusement park in the world. At the same time, he reportedly hates inspections, as they slow everything down. He also arguably never thought of bringing in an ethics advisor during the early days of his little ‘science project’, only asking for the input of outside ‘beta testers’ when it was already too late and the forces he had helped create were about to be unleashed, never to be contained again.
Praeter-Colonial Corollary: Sparing no expense should not only mean investing a lot in technology, or in R&D (Research and Development). It should also entail making sure all the relevant regulations and safeguards are observed, not just legally but also ethically speaking. A remarkable ethical experiment in colonial history – admittedly more remembered for the literature it produced than for the results it actually achieved – was the Valladolid Debate. Around 1550, the Spanish King ordered his subjects to pause all conquest in the Americas until it could be ascertained that they were doing it for the right moral reasons (Brunstetter and Zartner 2011). We should learn from this historic experiment and have a ‘Valladolid Debate 2.0’ on the risks posed by AI to another vulnerable population, namely ourselves. Unlike the indigenous populations at the receiving end of the Spanish Conquista, humans today do have the power to pause the advance of this new portentous power coming for them. We don’t have to wait for AI to develop self-awareness and (perhaps even less likely) ethical self-control to have a serious conversation about the dangers of this new trend. The praeter-colonial mind, luckily, can already engage in such debates, and they should be entertained among as many people as possible within our human camp.
Coda
We live in a post-colonial world, or so we are told. Yet the legacies of colonialism are all around us. The very words you are reading right now, coded in a language disseminated by the forces of imperialism, confirm this. That does not mean that colonialism is alive and well. Empires have fallen; nations have attained their independence. Yet this doesn’t mean we live in a world completely free of colonialism either, or that we can revert to a pre-colonial time. The mind that tries to make sense of all of this is the praeter-colonial mind, a mind that attempts to turn the ‘supernatural’ or ‘antinatural’ aspects of colonialism into the familiar and comprehensible realm of the preternatural. A mind that, in accordance with the varied meanings of the prefix ‘praeter’ (namely ‘past, by, beyond, above, more than, in addition to, besides’), sees colonialism simultaneously as past and present as it is confronted with the evidence of its many legacies. A mind that, in the end, attempts to step aside to gain perspective and go above and beyond colonialism for the sake of the present and the future.
In this intellectual journey, the praeter-colonial mind is never truly alone, as it is grouped alongside other minds in many different huddles connected to colonial experiences from the past, present, and even the near future. Thus, in this first half of the book we have studied some of the main huddles resulting from British imperialism, namely the UK and the US, as well as other wider collectives such as the West and the Global South. We have further zoomed out to gain a global perspective on the main huddle containing all of our minds: humanity, as it stands in opposition, for the first time in human history, to another form of intelligence capable of subjugating it – AI. It is time now to move on to some of the main struggles of our time in the second part of this book, including the many challenges surrounding the decolonization of intellect (Chapter Five); war and political violence (Chapter Six); the rules-based international order (Chapter Seven); the rise of China (Chapter Eight); and Trumpism and MAGA (Chapter Nine).