Scottish AI Summit 2022


Last week I participated in the first-ever Scottish AI Summit in Edinburgh, held at the Dynamic Earth venue. It was my first time at this conference, and I wanted to share my thoughts on the topics discussed as well as the event itself.

Conference Agenda

The three main topics of the summit were Trustworthiness, Ethics and Inclusivity in AI. These formed the basis for eight distinct panels and three workshops. As it wasn't possible to attend all of them, I had to choose which would be the most relevant to my PhD and interests. While I could spend hours discussing all of them, I will share the ones that were the most insightful from my perspective.

Panel 5: Why is Explainable AI still a challenge?

Explainability in AI is becoming a very hot topic right now. Increasing access to software and easily accessible programming packages has allowed researchers from non-programming backgrounds to apply and test machine learning models on their own ideas. Furthermore, the rise of artificial neural networks has made sophisticated methods widely available. While they often provide superior results to traditional methods, their complexity creates explainability issues. This phenomenon is known as the 'black box' model, whose results often elude human interpretation. Rather than trying to explain the models themselves, researchers focus on interpreting the results by isolating the important features. However, this approach still does not explain why and how the model reaches its final decision; it merely gives an idea of what is being considered.
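To make the feature-isolation idea concrete, here is a minimal sketch of permutation feature importance, one common way of probing a black-box model. The dataset and model choice (scikit-learn's breast cancer data and a random forest) are my own illustrative assumptions, not anything presented at the summit.

    # Minimal sketch: permutation importance on a black-box classifier.
    # Dataset and model are illustrative assumptions only.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train an opaque ("black box") model.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops:
    # a large drop means the model relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Print the five most important features.
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Note that a ranking like this tells us which inputs the model leans on, but, as discussed in the panel, not why or how they combine into the final decision.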

Panel Discussions

The panellists were AI experts from both academia and industry. While in some cases the lack of explainability might not be an issue (for example, the Netflix recommendation algorithm), in areas such as law enforcement or healthcare it poses a huge threat. Whether it is academic researchers or companies, it is crucial to contextualise the AI system: some applications might favour performance over explainability, while others should not. According to one of the panellists, the importance of explainability is linked to human participation in the system. In other words, a system in which the final decisions are made by humans does not necessarily have to provide complete explainability, as human experts are only using it to aid their decisions. Another area in which explainability might not play an important role is the detection of rare, often catastrophic events that would otherwise not be discovered soon enough (e.g., nuclear reactor faults).

My Views

While I can definitely agree with the latter case, in which AI is the only tool at the human's disposal, a lack of explainability in high-stakes sectors such as law enforcement or healthcare might lead to terrible consequences. For one, people might begin to rely on the system too much, and while it might provide satisfactory results at the beginning, not understanding its decision process does nothing to improve our understanding of the specific disease or legal verdict. If the system later turned out to be faulty, people would demand compensation and court judgements would have to be revoked, further stigmatising AI for years. I agree with the panellists that there is no 'one size fits all' approach to explainability in AI. Each case is different; nevertheless, keeping the human in the loop could limit the risks arising from a lack of explainability.

Workshop 2: What does Responsible Innovation Mean to You?

The main reason I attended this workshop was that I wanted to see how it differed from the panel discussions. To my surprise, the conversations there were much more interactive, with participants having a chance to talk with each other and vote on the questions raised.

Discussion

Again, people from industry and academia were invited, and in this case I was able to explore approaches to responsibility in AI from different perspectives. It was interesting to see how aspects such as ethics, transparency, inclusion and accountability align across all of these different parties. The speakers talked about how the approach to responsibility in AI has changed over time, with people now taking a more proactive stance.

My Views

While I consider the aspects mentioned in the previous paragraph important to every AI system, I think the speakers failed to discuss the issue of participation, which I believe is essential to a well-functioning AI system. Without participation, there is no reason to work on the remaining aspects, as there is no one to benefit from them. While these concepts become highly interconnected later in the process, I think more has to be done to ensure people are not afraid of AI and are able to trust it, especially older generations.

General Remarks

I very much enjoyed the summit, and it was amazing to meet important people from the industry and catch up on the latest developments in this area. It was also a great place to meet new people and establish contacts that might be relevant to my research. I learned about Scottish companies working within the AI domain, and the talks I participated in gave me some interesting ideas on how to proceed with my research.


All of the talks mentioned in this post, along with the original recordings, can be found on the summit's website here. All you need to do is log into their virtual platform, and you will have free access to all of the materials online for the next five months.
