As part of this research I have conducted both a literature review and a series of interviews with fellow students and experts in the field of AI. Based on this, I have written this short blogpost, in which I discuss the blackboxing of AI and some potential solutions to this problem.
Blackboxing of technology refers to the lack of transparency about the processes within technologies: users see inputs and outputs, but not what happens in between. As Bruno Latour famously put it in his 1999 book Pandora’s Hope, blackboxing can be seen as:
“the way scientific and technical work is made invisible by its own success”
What the author meant by this is that when a machine runs efficiently and without error, there is no need to focus on the complex processes within it, which leads to those processes becoming obscured. Applied to algorithms or AI, this means that it is unclear how the system operates or arrives at the output it gives. Think for instance of Google, which gives you search results without your understanding how the algorithm gathered those specific results, or of the social media recommendations that appear without any explanation as to why they were selected for you. These are just a few straightforward examples that many of us likely encounter daily. Blackboxing is thus not an obscure issue within computer science, but something that affects us all, often without our realizing it.
From this we can see that blackboxing can be quite problematic. Luckily, there are also a number of ways to crack open the blackbox and shed light on the processes of AI. One is Explainable AI, a set of methods and tools intended to uncover the blackbox of AI by measuring and visualizing how a model arrives at its output. While these are no doubt important measures for understanding AI, they seem suitable only for those with expertise in the field, and thus not for the general public. As discussed earlier, it is important to recognize that the general public does not have a full understanding of what AI entails, where it is being applied, and what effects it may have on daily life. In order to truly crack open the blackbox of AI, this basic understanding of the technology needs to be gained. This can be done mainly through education on AI, whether in the form of formal education, better representation of AI in the media, or initiatives by NGOs and artists, like Richard Vijgen's art installation in the image above.
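To make the idea of Explainable AI slightly more concrete: one common model-agnostic technique is permutation importance, which treats the model purely as a blackbox and measures how much its accuracy drops when each input feature is shuffled. The sketch below is a minimal illustration in plain Python; the "model" and dataset are invented for the example (the model secretly uses only the first feature), not taken from any real system.

```python
import random

# Hypothetical blackbox model: we can only query its predictions,
# not inspect its internals. (For the demo it secretly uses only feature 0.)
def black_box_predict(row):
    return 1 if row[0] > 0.5 else 0

# Toy dataset: 200 rows with 3 features each; labels depend on feature 0 only.
random.seed(42)
data = [[random.random() for _ in range(3)] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(black_box_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)

# Permutation importance: shuffle one feature column at a time and record
# how much the model's accuracy drops. A large drop means the model relied
# on that feature; no drop means the feature was ignored.
importance = {}
for col in range(3):
    shuffled_col = [row[col] for row in data]
    random.shuffle(shuffled_col)
    perturbed = [row[:col] + [v] + row[col + 1:] for row, v in zip(data, shuffled_col)]
    importance[col] = baseline - accuracy(perturbed)

print(importance)  # expect a large drop for feature 0, none for features 1 and 2
```

Even without access to the model's internals, this kind of probing reveals which inputs actually drive its decisions, which is exactly the sort of insight Explainable AI tools aim to surface (though real toolkits produce far richer visualizations).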
Additionally, increased transparency is required from those who control AI systems, whether through enforced regulation or more ethical AI practices. Education and transparency through these methods not only help people understand what AI is, but can also teach them about the processes of AI, thus cracking the blackbox open for the general public and ensuring more mindful and safe usage.