How to open the blackbox of artificial intelligence
The lack of knowledge and common misconceptions about artificial intelligence are partially due to the blackboxing of AI. Master's student Stan van Bommel has been researching the blackboxing of artificial intelligence as part of his internship at Beeld & Geluid and his Master's thesis. This blog is the first of a two-part series about AI.
As part of this research I have conducted both a literature review and a series of interviews with fellow students and experts in the field of AI. Based on this, I have written this short blog post, in which I discuss the topic and some potential solutions to the problem.
Blackboxing of technology is a concept that refers to the lack of knowledge about, and transparency of, the processes within technologies. As Bruno Latour famously put it in his 1999 book Pandora's Hope, blackboxing can be seen as:
“the way scientific and technical work is made invisible by its own success”
What Latour meant by this is that when a machine runs efficiently and without error, there is no need to focus on the complex processes within it, and those processes become obscured. Applied to algorithms or AI, it means that it is unclear how the machine operates or arrives at the output it gives. Think, for instance, of Google, which gives you search results without you understanding how the algorithm gathered those specific results, or of the social media recommendations you see without any explanation as to why they are recommended for you. These are a few straightforward examples that many of us likely encounter daily. Blackboxing is thus not an obscure issue within computer science, but something that affects us all, often without us realizing it.
You may then be wondering what effects this has on people. According to the interviews I have conducted, blackboxing leads to a lack of understanding of how AI works and of when it is being used. Even the experts in the field of AI - despite having better knowledge of the topic - say they do not fully understand the processes behind AI, especially not when it comes to the secretive AI practices of big tech companies. This lack of knowledge or understanding often leads to feelings of uncertainty and mistrust, as people do not know what the AI is doing with their data or how it is affecting them when, for instance, they browse the web. Additionally, blackboxing can lead to a lack of accountability for companies that own the AI, to difficulties in controlling and regulating AI, and to widening digital divides in society.
From this we can see that blackboxing can be quite problematic. Luckily, there are also a number of things that can crack open the blackbox and shed light on the processes of AI. One of these is Explainable AI, a set of methods and tools intended to uncover the blackbox of AI by measuring and visualizing its processes. While these are no doubt important measures for understanding AI, they seem suitable only for those with expertise in the field, and thus not for the general public. As discussed earlier, it is important to recognize that the general public does not have a full understanding of what AI entails, where it is being applied, and what effects it may have on daily life. In order to truly crack open the blackbox of AI, this basic understanding of the technology needs to be built. This can be done mainly through education on AI, whether in the form of formal education, better representation of AI in the media, or initiatives by NGOs or artists, like Richard Vijgen's art installation in the image above.
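To give a flavor of what Explainable AI methods do, here is a minimal sketch in Python. The "model" and its features are entirely hypothetical, and the technique shown (zeroing out one input at a time and measuring how the output changes) is just one simple explanation strategy, not a description of any particular tool mentioned above:

```python
def blackbox_score(features):
    """A stand-in for an opaque model: users only ever see its output."""
    # Hidden internals (a weighted sum) that a user normally never sees.
    weights = {"watch_time": 0.6, "clicks": 0.3, "shares": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Estimate each feature's contribution by zeroing it out and
    measuring how much the model's output drops."""
    baseline = blackbox_score(features)
    contributions = {}
    for name in features:
        altered = dict(features, **{name: 0})
        contributions[name] = baseline - blackbox_score(altered)
    return contributions

# A hypothetical user profile fed into the blackbox.
user = {"watch_time": 10, "clicks": 4, "shares": 2}
print(explain(user))  # shows which feature drives the recommendation most
```

Even this toy example illustrates the point: without the `explain` step, the user only receives a score, with no idea that their watch time dominates the outcome. Real Explainable AI tools are far more sophisticated, but the goal is the same - making the hidden weighting visible.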
Additionally, increased transparency from those controlling the AI is required, whether through enforced regulation or more ethical AI practices. Education and transparency of this kind not only help people understand what AI is, but can also teach them about the processes behind it, thus cracking the blackbox open for the general public and ensuring more mindful and safe usage.