A huge congratulations to Zahra Ahmadi on successfully defending her Master’s thesis!
Zahra’s research produced comprehensive and systematic evidence that most research into new AI tools for people with disabilities is sadly not evaluated with the communities it is designed to help, and importantly, that this is leading to unidentified risks and failures that can cause harm. Further, she proposed TACTIC, a well-defined best-practice process for carrying out inclusive and accessible research & development in partnership with relevant communities. The hope is that by following TACTIC, research can better focus on the real barriers experienced by people with disabilities, and that many of these harms can be avoided by empowering people to better understand the risks and failures associated with assistive technology.
Dr. Peter R. Lewis and Ph.D. Candidate Nathan Lloyd attended the International Conference on Artificial Life (ALife), which brings together researchers invested in the synthesis and simulation of living systems.
On July 24th, Nathan presented work in the ALife & Society special session entitled Incorporating Social Expectations into the Expectation Event Calculus. This paper brings theory from social psychology into AI practice, demonstrating novel agents that can model and reason using social expectations, a prerequisite to social norms.
On July 17th, following a successful Master’s defense, Parisa presented her research, entitled Bias and Fairness in Transfer Learning, to BrumAI, an AI community group based in Birmingham, UK.
Parisa presented virtually to over 90 participants, receiving great feedback from the attendees. Parisa’s talk has been made publicly available here.
First, a huge congratulations to the now Parisa Salmani MSc for successfully defending their thesis entitled Bias and Fairness in Transfer Learning.
Parisa’s research demonstrated, for the first time, that the use of transfer learning (a common technique used in all sorts of applications of machine learning) can introduce new demographic bias, and hence discrimination against particular sub-populations. Parisa demonstrated this across two domains by comparing transfer-learned models with models trained from scratch on the same task.
On June 3rd, Aditya Ravi and Nicholas Lee joined the Trustworthy AI Lab as developers on the socially self-aware multi-robot system project. Aditya and Nicholas will be working together to explore the themes raised in prior research, to develop socially self-aware robots with TurtleBot 4s.
On May 27th and 28th, Zahra Ahmadi and Joelma Peixoto had the incredible opportunity to attend the Accessible Canada - Accessible World / Un Canada accessible - Un monde accessible Conference. This groundbreaking event was an instructive experience for advancing accessibility and inclusive design across various sectors.
We’re excited to announce that two new graduate students are joining the Trustworthy AI Lab.
First, an official welcome to PhD student Andrew Putman. Andrew joined the lab earlier this year while completing their master’s degree in the Faculty of Health Sciences, here at Ontario Tech. Andrew’s primary research interests centre on equity in healthcare decision-making, both human and algorithmic.
Zahra Ahmadi was featured in a recorded discussion hosted by the Canadian National Institute for the Blind (CNIB) to honour #GlobalAccessibilityAwarenessDay (GAAD), in a meeting with almost 150 participants! Akriti Pandey, of CNIB, and Zahra Ahmadi had an insightful discussion on the importance of inclusive and accessible AI technologies, and on the existing gaps and limitations of AI-based assistive technologies.
Trustworthy AI Lab Director and Canada Research Chair Dr. Peter Lewis was featured as an invited panellist at a recent public event on ‘AI and the Future of Education’, with a focus on post-secondary education, hosted by the Enoch Turner Schoolhouse Foundation.