“I hope the future of digital diplomacy will be less naïve about digital technology”

Katharina E. Höne spoke with the editors of CEBRI-Journal

Katharina E. Höne researches, writes, and teaches at the intersection of international relations and digital technology. Until July 2023, she was Director of Research at DiploFoundation, a Swiss-Maltese non-governmental organization that specializes in capacity development in the field of Internet governance and digital policy.

Her areas of interest and expertise include the impact of digital technology on international relations and diplomatic practices; the ethical and equitable (global) governance of artificial intelligence and its role as a topic and tool of foreign policy; and science diplomacy in the context of emerging digital technologies. She has given presentations, conducted trainings, and undertaken research for the African Union, the European External Action Service, the foreign ministries of Finland, Namibia, and South Africa, the Swiss Federal Department of Foreign Affairs, and the Swiss Agency for Development and Cooperation.

Katharina holds a PhD from the Department of International Politics at Aberystwyth University (UK) and an MA in diplomatic studies from the University of Leicester (UK).

The following is the interview given to CEBRI-Journal in September 2023.

Artificial intelligence is at the center of many discussions regarding the impact of technology on world politics. There is great potential for economic growth and productivity but also risks that must be addressed. What is your take on AI and international relations, broadly speaking?

Katharina Höne: There is a three-part typology, which offers a very broad orientation for everyone who wants to begin thinking about AI and diplomacy. It was introduced by Jovan Kurbalija at DiploFoundation to think about the relation between diplomacy and (digital) technology. The three broad categories are: AI as a tool for diplomacy; AI as a topic of diplomacy; and AI as something that shifts the (geopolitical) environment in which diplomacy is practiced.

For example, AI tools for diplomacy might include chatbots for consular affairs or the automated analysis of satellite images in humanitarian crisis response. Most importantly, various tools that can support negotiators have also been discussed and trialed, for example, by the United Nations (UN) Department of Political and Peacebuilding Affairs (DPPA) Innovation Cell and by DiploFoundation.

Diplomats also encounter AI as a topic in various negotiations and discussion fora. The work of the Global Partnership on AI (GPAI) comes to mind, UNESCO’s Recommendation on the Ethics of AI, and the work of the Group of Governmental Experts on Lethal Autonomous Weapons Systems. Let’s also not forget that the UN Security Council recently had a debate on AI.

But beyond AI as a tool and topic of diplomacy, we also need to think of AI as a geopolitical factor. The so-called AI arms race between the U.S. and China is a good example of the potential geopolitical shifts that AI tools could trigger. But beyond the big systems conflict that the “AI arms race” seems to suggest, AI might also widen the digital divide and create a greater gap between those who have the resources to participate and benefit and those who don’t.

This three-part typology works extremely well as a first orientation – for practitioners and for scholars alike. Having said this, the categories are very much related in practice.

The so-called AI arms race between the U.S. and China is a good example of the potential geopolitical shifts that AI tools could trigger. But beyond the big systems conflict that the “AI arms race” seems to suggest, AI might also widen the digital divide and create a greater gap between those who have the resources to participate and benefit and those who don’t.

Your question also mentions the opportunities and the risks associated with AI tools. Let’s start from a basic assumption: Any tool can be used for a “good” purpose or it can be used for a “bad” purpose. For example, a hammer can be used to build something or to destroy something. Depending on where you are standing, one of these acts is a positive one, the other is not. To give another example, social media posts can unite people by promoting understanding and fostering a sense of community. They can also divide people by amplifying stereotypes and hate speech. The historian of technology Melvin Kranzberg famously said that “technology is neither good nor bad; nor is it neutral.” I find this quote so important because it reminds us that the technology itself is not neutral. Many decisions go into each step of building and deploying an AI tool. Some of these decisions have far-reaching consequences and political and societal implications. This is where discussions about opportunities and risks need to start. This is also the place where people and institutions need to take responsibility in their respective capacities.

...the technology itself is not neutral. Many decisions go into each step of building and deploying an AI tool. Some of these decisions have far-reaching consequences and political and societal implications.

On AI governance at the global level, there have been talks about the need for an international agency to bring countries together in order to address current concerns and future challenges. In your view, will States be able to build consensus to overcome their differences and ensure that AI technologies will be used in a safe and trustworthy manner?

KH: Let me start by looking at the idea of consensus. Geoff Berridge, who I was lucky to have as a professor of diplomacy, always reminded his students that consensus is not the same as unanimity. In order to have consensus, not everyone needs to explicitly agree; it is enough that no one raises any objections within a given timeframe. If we keep that in mind, a global consensus on general principles on AI is very much possible. In fact, this is exactly where the work of the UN Tech Envoy, Amandeep Singh Gill, is heading. This year, for example, he held a multi-stakeholder consultation process on AI governance, which I participated in as part of a group brought together by the Future of Life Institute. Being part of this small piece of the process really illustrated the challenges of consensus for me. The efforts of the UN Tech Envoy will culminate in the Global Digital Compact (GDC), which will be agreed at the Summit of the Future in September 2024. The GDC will present a global consensus on AI. Another example of a global consensus on AI is UNESCO’s 2021 Recommendation on the Ethics of AI, which was adopted by member States. In other words, a global consensus on AI is on the way. Let’s be clear, consensus favors the lowest common denominator – especially when almost 200 member States and many more stakeholder voices are involved. It is the current best option to have a starting point for the global governance of AI. However, it is just that: a starting point.

Beyond that, it is clear that we need a global space for discussion on AI that is open to all. Currently, there is considerable fragmentation between States, or rather between groups of States. Some drawbridges are being raised, leaving a chasm where a conversation should have been. Further, the fragmented way in which AI is regulated and policies are developed in different countries is a challenge – take, for example, the way different countries reacted to the release of ChatGPT. Given these points and the potentially devastating and far-reaching consequences of some AI applications, various actors, including the UN Secretary-General, have suggested the creation of an International Agency for AI.

For me, this raises three main questions. First, duplication of efforts: what about the existing efforts of international organizations such as the International Telecommunication Union, UNESCO, and others? How could we meaningfully define the relation between these organizations and a new International AI Agency? Second, is it useful to talk about AI in general or would we have to narrow down the scope of such an agency to specific applications – for example, to the impact of AI on peace and security? Third, an agency that is not backed by binding international law will remain toothless. Given the seriousness of the situation, I don’t think that another advisory body that issues recommendations is enough. An International Agency for AI that acts as the secretariat for a legally binding International AI Convention would be a useful start.

Many Ministries of Foreign Affairs have actively been using digital tools to promote their foreign policy goals, including on social media. How do you see digital diplomacy evolving over the next few years?

KH: I’m not sure how digital diplomacy will evolve over the next few years, but I can tell you how I hope it will evolve. But, in order to look ahead, we also need to look back. After all, the past is the ancestor of the future. Looking at the 2000s and early 2010s, two main tendencies stand out. First, there was great optimism that social media could change people’s lives for the better. For example, social media was thought to be a great source of support for the protestors of the Arab Spring who sought societal change and greater freedom. Second, within Ministries of Foreign Affairs, there was a sense of being behind, of needing to be on social media in order to participate in the conversation and communicate about their work. Some countries were at the forefront of using social media – the U.S. and the UK come to mind. Others were trying to find their own way of engaging with this new way of communicating. However, things shifted in the mid-2010s with the so-called tech-lash. I define the tech-lash as the twin realizations that big tech companies have amassed a great deal of power and that social media has an increasingly negative impact on individuals, societies, and democracies.

Conversations about the rules and assumptions, in short, the algorithms that guide the behavior and usability of these tools, have taken place quite late – only after the initial hype had calmed down and the tech-lash was here. But why do we have these conversations so late in the game? I would hope that the future of digital diplomacy is less naïve about digital technology and takes to heart the point that technology is not neutral and that tools are not just a given. 

Further, a digital diplomacy of the future also needs to do a lot more to address the digital divide between countries. As AI tools become more relevant in many sectors of the economy and in foreign policy, there is a real danger that countries with fewer capacities to develop and deploy the technology will face disadvantages and already existing gaps become wider. International organizations need to play a big role in addressing this and this might even be a task for the suggested International AI Agency. 

Lastly, meaningful conversations with tech companies – be it in the area of cybersecurity, content policy, or emerging technologies – need to intensify. The practice of tech diplomacy, that some countries have leaned into since Denmark appointed the first Tech Ambassador in 2017, is a good example. Tech diplomacy practiced in this way also needs to include conversations about the values and principles guiding digital technology. 

You have experience in diplomatic capacity-building and online training courses for policymakers and representatives of developing countries. What is your advice for young students and practitioners working in international relations? What skills are needed today to secure the best future jobs?

KH: In my experience, the responses given to such questions often become obsolete very quickly. I don’t remember the exact advice my peers and I were given when we started university 20 years ago. But I can say for certain that none of it stood the test of time. Why is that? A lot of the advice was based on a simple calculation. First, you ask what specific skills and jobs are currently in high demand. Second, you identify existing training programs or develop tailor-made ones and point people there. This is great in the very short term. New skills will be built and interesting experiences can be had. But it is not a useful long-term perspective.

For example, the release of ChatGPT has led to huge public interest in generative AI and the use of similar tools. A lot of conversations started to revolve around the importance of being able to write prompts for these applications in order to get useful output. Guidelines on prompt writing sprang up like weeds after rain. Would I advise young students and practitioners to focus on becoming good prompt writers? It is certainly interesting to learn more about this and experiment with prompts for generative AI, but I doubt this in itself will future-proof your career.

So, given my experiences with training and capacity building in digital diplomacy and related fields, what advice is left to give? I think it is very important to acknowledge that everyone’s situation will be different. But if we take a bird’s eye view, three points are worth emphasizing. 

First, regardless of your background, you need to develop a critical literacy when it comes to digital technology. By this I mean a basic understanding that allows you to ask critical questions, investigate the opportunities and risks of a given technology, understand power dynamics and potential harms, and find ways to meaningfully integrate new tools into your work. It is worth emphasizing that the goal of this critical literacy is not limited to the individual. Essentially, it is about preserving core human values, while making the best of the tools that we already have and the tools that can be developed in the future. 

Second, if you are a generalist by nature, do your best to preserve this in a world that demands increasing specialization. The philosopher Isaiah Berlin distinguished between two intellectual types: the fox and the hedgehog. Hedgehogs are motivated by a single idea and tend to have very focused and narrow interests that they explore to great depth. Foxes are driven by multiple ideas, have various interests, and explore on a broad scale, being interested in how these various aspects can fit together. Of course, any such categorization should be taken with a grain of salt, and essentializing people in this way – something Berlin did not intend with his essay – also has its dangers. But the point I want to make is that if you feel like you are a fox, don’t force yourself to be a hedgehog. It will be important to find training programs and institutions that can support the “fox-nature”. My personal contention is that the world needs more visible foxes and that the drive towards specialization in our education systems and institutions needs a counterbalance.

…boundary spanning involves communication skills but also the ability to understand disciplinary boundaries and act across those boundaries. (...) [It] is about building networks and maintaining sustained collaborations across disciplines or professional fields.

Third, there is a concept called boundary spanning, from the field of science diplomacy, which I have come to appreciate a lot. Some describe boundary spanners as the individuals that “straddle the divide between information producers and users” and “interfaces between a unit and its environment”. In the field of science diplomacy, boundary spanners are those individuals and institutions that “bridge the policy and the scientific spheres in order to facilitate research uptake and increase policy impact”. Broadly speaking, boundary spanning involves communication skills but also the ability to understand disciplinary boundaries and act across those boundaries. It is not just the exchange of knowledge across “divides”; it is about building networks and maintaining sustained collaborations across disciplines or professional fields. DiploFoundation offers an online course on science diplomacy, and boundary spanning was one of the topics that resonated most with participants – those that came from the world of science and those that came from the world of diplomacy. On the theme of technology and international politics in the digital age, I think that boundary spanning is at the core of solving some of the most important issues related to AI and other emerging technologies.

Interview granted through written medium on September 10, 2023.

Copyright © 2023 CEBRI-Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original article is properly cited.