
S1E3 – During this episode we interview Dr. Jean-Marc Rickli, Head of Global Risk and Resilience at the Geneva Centre for Security Policy (GCSP) in Geneva, Switzerland. Among other questions, we ask which technologies the security sector is already embracing and which are on the horizon that have the potential to disrupt the industry. With serious concerns about the way facial recognition technology is being used by security services, Dr. Rickli considers whether technology is a force for good or bad, and what mechanisms can be put in place to effectively govern the use of technology across the sector.


This podcast was originally published June 25, 2020

I am pleased to introduce the third episode in ICoCA’s new podcast series, Future Security Trends: Implications for Human Rights. Today, I’ll be in conversation with Jean-Marc Rickli, a world expert on technology and security, to discuss the future of technology and security: a force for good or bad? Jean-Marc is Head of Global Risk and Resilience at the Geneva Centre for Security Policy, also known as the GCSP. Prior to this, he was an assistant professor at King’s College London in the Department of Defence Studies. So, Jean-Marc, could you first tell us a bit about yourself and the work of the Geneva Centre for Security Policy?

Thank you for having me on this podcast. The Geneva Centre for Security Policy is an international non-profit foundation based in Geneva, Switzerland. We comprise 53 member states, including all five permanent members of the UN Security Council. Our mission is to promote international peace and security, and to prepare and transform individuals and organizations so they can create a safer world. We do that through different tracks: we devote a lot of time to executive education, research and public discussion, and we also have a fellowship program for executives in transition. So, we are a bit of a hybrid organization. We are not an academic institution as such, we are not a think tank as such, but we are a bit of the two. As for me, I’m in charge of global risk and resilience, so I mainly focus on the transformation of warfare and how emerging technologies are impacting warfare. But I also survey what is happening in the world in geopolitics. Prior to that, I was based for five years in the Middle East, in the Gulf, where I was teaching, as you mentioned, for King’s College London in Qatar, and before that I was based at Khalifa University in the UAE. But my work nowadays is devoted to really trying to spot weak signals in terms of technology developments and how they will affect the way people use violence and wage war.

Well, thank you so much for that, Jean-Marc. Now, you gave a thought-provoking and sometimes frightening talk last November at the first workshop that ICoCA convened on Future Security Trends, during which you illustrated how technology is experiencing exponential growth. The world is experiencing first-hand the consequences of this phenomenon, the phenomenon of exponential growth, with the spread of COVID-19. But could you describe how this concept applies to the technology sector, and whether we should be equally afraid of it?

As you rightly mentioned, over the last three to four months we really did experience what exponential growth is all about. And most people are not used to this, because we are wired to think in linear terms. So, just to explain again what exponentiality is all about: take a simple rule, like something doubles at every iteration. After five iterations you will be at 32, but after ten iterations you will be at 1,024. So the 32 you reach after five iterations is a mere 3% of the outcome after ten iterations. And if you continue this, after 50 iterations you will be at an order of magnitude of 1.1 times 10 to the power of 15. What this illustrates is that exponentiality is characterized by the fact that for some time you see nothing, and suddenly you have an explosion. And from there it moves really, really fast. So, in terms of technology, we see some characteristics of exponential growth, especially in the digital domain. You might be aware of the so-called Moore’s law, which tells you that computing power is doubling; first it was every two years, and now it’s down to every 18 months. So, if you plot all CPUs, central processing units, what makes your computer basically work and calculate, on a logarithmic graph, you’ll see a linear relation across all the CPUs that have been developed since the 70s. That means the relationship is quite strong. A few months ago, a new organization called OpenAI compared development in CPUs with development in algorithmic compute, and what they realized was that from 2012 to 2018, the doubling period went down to three and a half months. Which means that if you take Moore’s law, with a doubling period of 18 months, a computer in 2018 would have been about 16 times more powerful than in 2012. But with the growth of artificial intelligence, machine learning, and algorithmic compute, this number grew to 300,000 times, meaning that in terms of computing power an algorithm in 2018 was 300,000 times more powerful than in 2012. So, this is an illustration of how fast technology is growing.
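The doubling arithmetic described above can be made concrete with a few lines of Python (an illustrative sketch added here, not part of the interview):

```python
# A quantity that doubles at every iteration, starting from 1.
def doublings(n):
    """Value after n doublings, starting from 1."""
    return 2 ** n

five = doublings(5)    # 32
ten = doublings(10)    # 1024
fifty = doublings(50)  # about 1.1e15

# The value after five iterations is a mere ~3% of the value after ten.
print(five, ten, fifty)
print(f"{five / ten:.0%}")  # 3%
```

The striking part is the last step: 50 doublings turn 1 into roughly 1.1 quadrillion, which is why exponential curves look flat for a long time and then explode.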

Obviously, there are some limitations to that, and they have to do with the fact that, in order to get to this result, you need an almost similar growth in energy consumption. And there are some physical limitations to how much we can continue to grow like this. And it’s not just in artificial intelligence or computing that we see such developments. If you take genomics, for instance, the first time we sequenced the human genome, the project started in 1990 and ended in 2003: it took 13 years and cost $2.7 billion. Nowadays you have companies offering to sequence your DNA for $1,000 in one day, and some companies are working on sequencing in one hour for $100. So here what you see is that the technology is growing very rapidly. It is costly to develop, but once the technology has been developed, prices drop very quickly, which means that more people can afford it and use it. This is why people are talking about this era as an exponential era, because we see this pattern of exponential growth developing in several technological fields. It doesn’t mean that we will grow forever, but for now, what we can see is that in your own lifetime you can see dramatic changes that require you to challenge, and sometimes change, your assumptions and your worldview.

Thank you. Now, there are obviously a range of technologies that are experiencing this growth now, but which technologies is the security sector already embracing and how is this changing the way that private security companies go about their business?

So, what you have to understand is that current developments in emerging technologies are driven by the private sector. Unlike what we had, for instance, during the Second World War, when the United States decided to develop nuclear weapons, gathered scientists in a desert, and these scientists, in isolation, developed the bomb. And if you look at the proliferation of nuclear technology, even though it proliferated, it was still quite limited and contained. Emerging technologies these days are no longer developed primarily by states. That doesn’t mean that states are not using them, but the push factors are coming from the private sector, which means that once these technologies have been developed, they are available. And the security industry is not immune, if you want, to using these technologies. So, you have a broad spectrum of potential uses. You have, for instance, 3D printing, which revolutionizes logistics. You have augmented reality and virtual reality, which allow complete paradigm changes in training. You have obviously artificial intelligence, especially for now in the field of analytics: big data are available, and companies that are able to use these data and extract meaning, especially through artificial intelligence, could have a tremendous impact in terms of surveillance and analysis. There is obviously cyber security, and the combination of cyber security and artificial intelligence; this is an emerging field where lots of development still needs to be done, but automating processes of detection is increasingly possible with AI. There is the use of blockchain for anything that deals with authentication of data. And probably one of the technologies that is most used is drones: development in drone technology has really exploded over the last 10 to 15 years, and drones are being used for surveillance purposes.
But the future of drones will increasingly be in their use in swarms: using multiple drones in a way that you can see collective intelligence emerging, having a set of drones behave as a single entity, if you want.
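The "single entity" behaviour Dr. Rickli describes usually emerges from simple local rules, as in classic flocking ("boids") models. Below is a minimal toy sketch of such rules; the rule weights, drone count, and update scheme are illustrative assumptions, not anything from the interview:

```python
import numpy as np

# Each drone adjusts its velocity using only three local rules:
# cohesion (move toward the group), alignment (match headings),
# and separation (avoid collisions).
def swarm_step(pos, vel, cohesion=0.01, alignment=0.05,
               separation=0.05, min_dist=1.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        others = np.arange(len(pos)) != i
        # Cohesion: steer toward the centre of the other drones.
        new_vel[i] += cohesion * (pos[others].mean(axis=0) - pos[i])
        # Alignment: match the average heading of the others.
        new_vel[i] += alignment * (vel[others].mean(axis=0) - vel[i])
        # Separation: push away from drones that are too close.
        diff = pos[i] - pos[others]
        dist = np.linalg.norm(diff, axis=1)
        close = dist < min_dist
        if close.any():
            new_vel[i] += separation * diff[close].sum(axis=0)
    return pos + new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(20, 2))  # 20 drones scattered in 2-D
vel = rng.uniform(-1, 1, size=(20, 2))
for _ in range(50):
    pos, vel = swarm_step(pos, vel)
```

No drone is in charge and no central controller exists; the coordinated, flock-like motion is a property of the group, which is what "collective intelligence" means in this context.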

Then there is anything that deals with the Internet of Things: all the connected devices and sensors that could be used for surveillance, but that can also be used to extract meaning and information. But it’s not just technology that is evolving; it’s also new business models. In a sense, these technologies allow non-traditional actors that were maybe not in security to get involved in the security industry. If you look, for instance, at companies that deal with big data: once you know how to extract data for mapping a specific ecosystem, say, to extract meaning about what the solar industry is doing, nothing prevents you from using similar algorithms and applying them to different issues that deal with security.

Does that mean that the security industry itself is really an innovator and driver of technology? And what does that mean for the industry? Is it diversifying?

Well, you would have to define what the security industry is all about. You have a few companies that have been important security innovators from the very beginning. If you take, for instance, Palantir in the United States, which uses big data analytics and artificial intelligence to extract meaning, yes, such companies are innovating in the field. But what you also see is that other companies that are not specifically devoted to security can develop technology that could be used by security companies, or end up doing security work themselves. If you look at Facebook, for instance, its DNA is absolutely not in security, yet Facebook has a counter-terrorism unit, because ISIS was actually the first organization that understood how to weaponize social media. And it did that by combining the use of ultraviolence with the virality of social media. When ISIS and other terrorist organizations used social media as a force multiplier, the social media companies, Twitter, Facebook and others, had to react and invest in that kind of capacity. But it’s not specific to the security industry. We are living in an era where disruption can come not from the competitor you see on the horizon, but from a company that maybe is not working in your field at all, but has developed a technology that it can easily repurpose and scale. And this is also a characteristic and a consequence of exponential growth: the scalability of these technologies is phenomenal, and the ease with which they can be repurposed or used on different datasets is also remarkable.

So what technologies are on the horizon? Perhaps not yet adopted, but that have the potential to disrupt the private security sector in the future? And how is this likely to impact the industry?

So, you have new technologies on the horizon, but also combinations of existing technologies that could have a disruptive impact. In terms of new technologies, you have neuromorphic computing, for instance, which basically mimics the neurobiological architecture present in your nervous system. The advantage of neuromorphic computing is that it’s much faster than the CPUs I mentioned earlier, and these chips can be used much closer to the device; they do not rely on the cloud. That is what we call edge computing. With developments in IoT, the Internet of Things, what you need is very strong bandwidth to communicate between your device and the cloud. Instead of using that model, you can actually process information where the device is, which will increase, if you want, the speed and efficiency of your processes. Neuromorphic computing could be used in drones, and some experiments have been conducted where drones equipped with neuromorphic chips can react much faster than the drones we see today, faster than the way you would basically pilot these drones. And if you add autonomy to that, it opens the gate for drones that could be used in very different ways, including for offensive purposes, because they would be so reactive that they could maybe evade defenses.

Natural language processing is also not new, but advances and improvements have been made in the ability to process and analyze large amounts of natural language data. Most machine learning algorithms extract meaning from pictures, but extracting meaning from language is much more difficult, because there is a need to comprehend the meaning of a sentence.

But as I mentioned, the combination of emerging technologies will also have a tremendous impact: the combination of AI and neurosciences, AI and big data, and not just digital data, but increasingly biological data, brain data, that are becoming available. If you are able to combine these different data and extract meaning out of them, you’ll get a very strong advantage over your competitors. The field of robotics is very important: autonomous robots. You may know this company called Boston Dynamics; there are lots of videos of its robots jumping on YouTube. During this COVID-19 crisis, Singapore used one of their robots, called Spot, to warn people in parks about social distancing. This robot is like a dog, if you want, and is able to walk like a dog; it would go into a park and remind people of social distancing. But the robot could also be used to screen patients or disinfect certain places. The growth in robotics has been outstanding over the last ten years, and the combination of robots with artificial intelligence will probably be disruptive.

But I would also mention that it’s not always emerging tech that has the capacity to be disruptive. Especially in the field of security, we also have to pay attention to combinations of legacy technology, of low tech. We had a clear example of the dramatic impact this could have with IEDs, improvised explosive devices, which were the combination of a simple mobile phone with an explosive, a can and nails, and with which you could detonate a bomb at a distance. So, when we talk about technology, we shouldn’t just focus on high-tech emerging technologies, but also on the way legacy technology can be improved and enhanced by simple means, by simple technological developments, as the mobile phone combined with explosives demonstrated.

Now, IBM’s CEO earlier this week told members of the US Congress that the company would no longer offer facial recognition technology, citing the potential for racial profiling and human rights abuse. And this was followed on Wednesday this week by Amazon, which said it would be implementing a one-year moratorium on police use of its recognition technology, but still allowed organizations focusing on stopping human trafficking to continue to use it. So, we’ve got a technology here that can be used, on the one hand, to protect human rights and, on the other, to perpetrate human rights abuses. How do we reconcile these two things?

That’s a very important point, and it is maybe a point that has been overlooked from the very beginning. As I mentioned earlier, the main driver of these technologies is the private sector, and most companies do not build technology with security in mind. If you take, for instance, your software: you constantly receive updates, because it was not built with a security mindset. This is the first problem. The second problem is the repurposing of technology. Technology could be used with good intentions, but it’s sometimes very easy to repurpose it with negative implications. If you take facial recognition, it could be used in a very good way when it comes to identifying specific people in crime or terrorism. The problem is that the data that have been used to train the algorithms are inherently biased. It’s not that those who trained these algorithms chose data that were biased, but there is an inherent bias in the data: some characteristics are overrepresented. Even if you take all the pictures you can find on the Internet, there are inherent biases, where some racial profiles are overrepresented or under-represented. And what we’ve seen with facial recognition is that when it comes to Africans, for instance, the systems performed really badly, with a lot of false positives; a lot of disparities when it comes to race and gender. So, following the George Floyd events in the US, Amazon and IBM decided to put facial recognition technology on hold.
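The disparity described here is typically measured by comparing error rates, such as the false positive rate, across demographic groups. A minimal sketch of such an audit follows; the group names and counts are invented purely for illustration:

```python
# Toy fairness audit of a face-matching system: compare false positive
# rates (FPR) across demographic groups. All counts are hypothetical.
def false_positive_rate(false_positives, true_negatives):
    return false_positives / (false_positives + true_negatives)

# (false positives, true negatives) per group, from a hypothetical audit
results = {
    "group_a": (5, 995),   # FPR = 0.5%
    "group_b": (80, 920),  # FPR = 8.0%
}

fpr = {g: false_positive_rate(fp, tn) for g, (fp, tn) in results.items()}
ratio = max(fpr.values()) / min(fpr.values())
print(fpr, f"disparity ratio: {ratio:.0f}x")  # a 16x gap between groups
```

An algorithm can be "accurate" on average and still show this kind of gap, which is why per-group error rates, not overall accuracy, are the relevant measure for the bias discussed above.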

But it’s not just the companies that develop this technology that have raised issues; at least they recognized there was a problem. A much smaller company called Clearview AI developed facial recognition software that basically runs its algorithm against a database of pictures taken from social media and other websites on the Internet. It is claimed that Clearview AI has a database of 3 billion images. To give you a comparison, the FBI database has more than 400 million. But this start-up, created three years ago, is using a dataset of 3 billion images. The way it is used is that you basically snap a picture of someone in the street, run this picture through the application, and the application will identify the person and give you information about them. This application is marketed as being of use for law enforcement authorities, and as of now, more than 2,000 law enforcement agencies in 27 countries in the world are using it. But it’s not just law enforcement agencies; there are also wealthy private individuals who are using it.

This application raises strong ethical issues in terms of privacy, because if anyone can take a picture of you by snapping it in the street, run it against a database and extract meaning about who you are and where you live, you can see what kind of security implications this could have. So, in all of this technological development, we should be more careful about the way these technologies are implemented. But it’s very difficult. Take, for instance, two technologies that have been used for good purposes: cryptography, which enhances security on the Internet, and blockchain, which basically led to the development of cryptocurrencies. Blockchain allows you to eliminate intermediaries, and so it is more cost-efficient for users. But when you combine blockchain with cryptography, you end up with a possible use in ransomware. Ransomware is basically malware that blocks your access to your computer until you pay to unlock it. That has been made possible because, through cryptography, you are able to lock someone else’s computer, and the victim is unable to break the code; and the criminal can be paid without being traced, because with cryptocurrencies on the blockchain it is very difficult to trace where the money is going. So here you see that technologies that were developed with good intentions can be misused and repurposed. And it’s impossible to ask researchers or developers to think about all possible implications.

So, that’s why we need to put in place governance infrastructures and structures that can deal with that. But it’s very difficult, because, as I mentioned, these technologies are not primarily developed by states but by the private sector, and they are also characterized by a very high speed of proliferation: once a technology has been developed and is in the digital domain, it’s almost impossible to stop its proliferation. An example of this is deepfakes. The technology behind deepfakes relies on generative adversarial networks, which are two algorithms pitted against each other in a zero-sum game scenario, where one has to generate new fake pictures and the other has to recognize that they are fake. This technique was developed in 2014, and the first deepfake videos were published at the end of 2017. Since then, the use of deepfakes, especially in the porn industry, has just exploded. So here you can see again how fast the technology is growing. And the problem is that governance is very slow: adopting laws to constrain the development of a certain technology is time-consuming, and most of the time it comes when harm has already been done. So you have an asymmetry between the speed of these developments and the way the international community can cope with them.
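The adversarial dynamic behind generative adversarial networks can be sketched in a tiny toy: a generator and a discriminator trained against each other on one-dimensional data. This is an illustrative numpy sketch of the idea, not the image-scale networks used for deepfakes; the data distribution and learning rates are arbitrary assumptions:

```python
import numpy as np

# Minimal 1-D GAN: the generator g(z) = a*z + b tries to imitate "real"
# data drawn from N(4, 0.5); the discriminator (logistic regression)
# tries to tell real samples from generated ones.
rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w -= lr * ((p_real - 1) * real + p_fake * fake).mean()
    c -= lr * ((p_real - 1) + p_fake).mean()

    # Generator step: push d(fake) -> 1, i.e. fool the discriminator.
    p_fake = sigmoid(w * fake + c)
    grad_x = -(1 - p_fake) * w   # gradient of -log d(x) w.r.t. x
    a -= lr * (grad_x * z).mean()
    b -= lr * grad_x.mean()

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (real mean is 4.0)")
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is exactly the "two algorithms pitted against each other" structure mentioned above.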

This is fascinating, and I do want to get on to governance and oversight mechanisms in a moment. But are you therefore saying that technology is really agnostic? That it depends on how it’s used, whether for good or bad, but the technology itself is really agnostic?

Well, that’s a big debate. I tend to think that technology, with a few exceptions, is neither good nor bad; it’s the way you use it that matters. You could maybe make a point about deepfakes: the idea that you take someone’s picture and merge it onto different digital material, I don’t see for now how you could use that in a positive way. You could maybe think about training purposes, but from the very beginning, those who used it did so in rather malicious ways. But again, most technologies are not good or bad. The same facial recognition algorithms could be used, for instance, to identify tumors in MRI pictures or X-rays, and so they could have very positive implications. But when the same algorithms are used for racial profiling, they have negative consequences. That’s why governance and governance structures are very important: you have to define the boundaries of how you can use these different technologies. And for technologies that are really dangerous and could have a tremendous negative impact, maybe the scientific community, the academic community, should rethink the sacrosanct principle of open source publication. Maybe not all information should be accessible to anyone. I think here especially of the field of synthetic biology: when you start developing techniques that can synthetically engineer very deadly viruses, we have to be very careful about the way we deal with the information and who has access to that kind of information.

So finally, let’s just focus a little bit more on the governance mechanisms then and particularly looking at the private security sector, which is ICoCA’s domain. What existing mechanisms are in place to mitigate risks of these technologies being harnessed in a way that could lead to human rights abuses? And are these effective, if there are any? And if not, what can be done about this?

So, you have the traditional international legal frameworks on existing technologies: weapons of mass destruction and others. As I mentioned earlier, these technologies are rather in the hands of states, and I would say it’s much easier to deal with them. But even there, if you take chemical weapons, for instance, we have seen constant violations over the last few years in terms of their use; ISIS used chemical weapons on more than 70 occasions during its reign in Syria and Iraq. Now, when it comes to new emerging technologies, the problem we face is that technology develops much faster than governance. When states tried to tackle these issues, for instance in the field of lethal autonomous weapon systems, that is, weapons able to select targets on their own, basically go about their objectives on their own and maybe reprioritize those objectives, these weapons do not yet exist, but states have understood the negative potential impact they could have and developed a framework for discussion at the UN through the Convention on Certain Conventional Weapons (CCW). They set up a governmental group of experts on lethal autonomous weapon systems, and this group has been discussing whether we should issue a ban on these weapons or not. The group has been going on for the last six years and has come up with 11 guiding principles, but what was really visible is that states have very different perspectives on whether we should ban them or limit their uses. In the end, you end up with very general principles that are still being discussed; and even if we end up with a treaty, which is very unlikely, this will take time. And in the meantime, technology evolves very rapidly.

One problem here is an issue of speed. Another is the problem of bringing together the relevant actors: the UN is an inter-state organization, whereas the people you should bring into such debates are not just states, but the scientific community and the private sector. So here we need to rethink governance frameworks so that the private sector is much more integrated into that kind of discussion. The problem is that the private sector and the scientific community are very often reluctant to talk about the security and military uses, or potential uses, of their technology. We saw that, for instance, with Google or Amazon. You might remember Project Maven in the United States, a collaboration between Google and the US Department of Defense to automate the analysis of real-time video feeds from drones, to lessen the burden on the operators; the algorithm would identify anything that popped up on the operator’s screen by labeling what it is. When it was made public, it led to an outcry in the US and also within Google, and Google then had to end the contract. What this affair revealed was that when it comes to security and military uses of these technologies, companies are reluctant to be engaged in such discussions because of the impact on their image. But they still do cooperate and develop technologies that could be used for security and military purposes. So, the point here is that we need to develop governance structures that take into account all the actors involved, something which is very difficult. And the problem is that most of the time this comes only after the technology has been developed; and once a technology has been developed and is out in public, it’s very easy for people to use it in ways that were not intended by its creator.

Well, this has been a fascinating discussion, with much to think about, especially for a multi-stakeholder initiative like ICoCA that brings the private sector together with governments and civil society. Hopefully there is a role for us here, and hopefully, with inputs such as your own, we will not be playing catch-up all the time, but can also try to get ahead of these issues. But for today, thanks so much, Jean-Marc. Really appreciate it.



The views and opinions presented in this article belong solely to the author(s) and do not necessarily represent the stance of the International Code of Conduct Association (ICoCA).