Latest News

Zahra Atf presents Evaluating the Trustworthiness of User-Generated Content on Social Media at ISTAS2024

On Friday, the 20th of September, Zahra Atf presented research entitled Evaluating the Trustworthiness of User-Generated Content on Social Media at the IEEE International Symposium on Technology and Society (ISTAS 2024). The theme of this year's ISTAS was the Social Implications of Artificial Intelligence (AI). Zahra's work, co-authored by Peter and Nathan, examines the psychological and content-based factors that affect the trustworthiness of user-generated content (UGC) on a food brand's Instagram page, based on data analysis spanning seven years.

Trustworthy AI Lab members and affiliates present research at ACSOS2024

From the 16th-20th of September, members of the Trustworthy AI Lab attended the 5th IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS2024).

On Thursday, Dr Lewis was joined on the Expert Panel by Dr Amanda Muller, Chief of Responsible Technology at Northrop Grumman, and Dr Jeremy Pitt, Professor of Intelligent and Self-Organising Systems at Imperial College London, to discuss 'Building trust in self-* systems'. Trust had been a theme running through the conference, and this Expert Panel explored the theory and practice of trust and trustworthiness, between humans and machines, and between machines. A key theme was 'trust calibration': how to empower people to make well-informed judgements about the trustworthiness, or otherwise, of complex AI and other socio-technical systems.

Congratulations Arsh on your MSc Defence!

Congratulations to Arsh Chowdhry on successfully defending his master’s thesis today!

In his research, Arsh developed a novel method for training classification models using multi-objective optimization, such that the resulting models can be simultaneously accurate and fair. He demonstrated that, in many cases, taking account of fairness explicitly during training through multi-objective optimization means that high accuracy can be achieved at the same time as fairness, something that often does not occur with traditional training methods. In other cases, the approach reveals a trade-off between accuracy and fairness, so that decision-makers can choose how to balance the competing priorities of prediction accuracy and different types of fairness when selecting which model to deploy, in a way that is specific to their context.
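To give a flavour of the idea, the sketch below is a minimal, hypothetical illustration (not Arsh's actual method): candidate models are scored on two objectives, accuracy and a simple group-fairness gap (the difference in positive-prediction rates between two groups), and the non-dominated (Pareto-optimal) models are kept for a decision-maker to choose among. All function names and the toy data are assumptions for illustration.

```python
# Hypothetical sketch of two-objective model selection:
# maximise accuracy, minimise a group-fairness gap.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rate between groups 0 and 1
    (a simple demographic-parity-style fairness gap)."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(0) - rate(1))

def pareto_front(models):
    """Keep models not dominated on (accuracy up, fairness gap down).

    `models` is a list of (name, accuracy, gap) tuples. A model is dominated
    if some other model is at least as good on both objectives and strictly
    better on at least one."""
    front = []
    for name, acc, gap in models:
        dominated = any(
            a >= acc and g <= gap and (a > acc or g < gap)
            for _, a, g in models
        )
        if not dominated:
            front.append((name, acc, gap))
    return front

# Toy usage: three hypothetical models scored on held-out data.
models = [
    ("A", 0.67, 0.33),  # dominated by B (lower accuracy, same gap)
    ("B", 1.00, 0.33),  # most accurate, but with a fairness gap
    ("C", 0.50, 0.00),  # perfectly fair, but less accurate
]
front = pareto_front(models)  # B and C survive: the accuracy/fairness trade-off
```

In the toy example, model A is discarded because B matches its fairness gap while being strictly more accurate; B and C remain on the front, making the trade-off explicit for whoever selects the model to deploy.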

Stage Completion: Socially self-aware MRS

The 9th of August marked the end of the first stage of the 'Socially self-aware Multi-Robot System' project.

Over the span of 10 weeks, Aditya and Nick took great strides towards our research objectives. The first stage of the project saw the completion of numerous key deliverables, chiefly domain knowledge transfer (ROS2 and Turtlebot4) via documentation and reports, and the development of self- and other-perceptive capabilities for speed, size, and risk, utilizing multiple modalities: SLAM, object detection, and depth perception.

Congratulations on your MSc defense Zahra!

A huge congratulations to Zahra Ahmadi on successfully defending her Master’s thesis!

Zahra’s research produced comprehensive and systematic evidence that most research into new AI tools for people with disabilities is, sadly, not evaluated with the communities it is designed to help, and, importantly, that this is leading to unidentified risks and failures that can cause harm. Further, she proposed TACTIC, a well-defined best-practice process for carrying out inclusive and accessible research & development in partnership with relevant communities. The hope is that by following TACTIC, research can better focus on the real barriers experienced by people with disabilities, and that many of the harms can be avoided by empowering people to better understand the risks and failures associated with assistive technology.

Nathan presents research on social expectations and Peter joins an art panel at ALife2024

Dr. Peter R. Lewis and Ph.D. Candidate Nathan Lloyd attended the International Conference on Artificial Life (ALife), which brings together researchers invested in the synthesis and simulation of living systems.

On July 24th, Nathan presented work in the ALife & Society special session entitled Incorporating Social Expectations into the Expectation Event Calculus. This paper brings theory from social psychology into AI practice, demonstrating novel agents that can model and reason using social expectations, a prerequisite to social norms.

Parisa Salmani presents at BrumAI

On July 17th, following a successful Master’s defense, Parisa presented her research, entitled Bias and Fairness in Transfer Learning, to BrumAI, an AI community group based in Birmingham, UK.

Parisa presented virtually to over 90 participants, receiving great feedback from the attendees. Parisa’s talk has been made publicly available here.
