On April 2nd, Zahra Ahmadi and Nathan Lloyd participated in a Graduate Poster and Demo Showcase hosted at Ontario Tech University, organized by the Computer Science Graduate Association, and colocated with the Faculty of Business and Information Technology’s Capstone Showcase.
Over the past fortnight, we have had the pleasure of hosting Marieke van Otterdijk, a Doctoral Research Fellow from the Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo.
Through our continued partnership with the Canadian National Institute for the Blind (CNIB), Trustworthy AI Lab members Zahra Atf and Zahra Ahmadi led a session and discussion on accessible AI at CNIB’s Co-Design Festival.
We are excited to announce new additions to the team!
Welcome to Dr. Stanard Pachong, who will be investigating machine learning techniques to analyze digitized bloodstain patterns in partnership with the Forensic Chemistry & Materials Lab at Ontario Tech.
On March 6th, 2024, Arsh Chowdhry presented his research, entitled Discovering Trade-offs between Fairness and Accuracy in ML Systems: A Multi-objective Approach, to BrumAI, a regional AI community group based in Birmingham, UK.
Ontario Tech’s Trustworthy AI Lab and Digital Life Institute will be co-hosting a Spring Speakers Forum event titled ‘Creative AI?’
Historically, creativity has been judged by its impact: the ways it moved other people’s thinking, challenged longstanding beliefs, or transformed a field. Artificial Intelligence has raised sensationalized debates concerning not only whether it can truly create, but also whether it can harm society. Do we trust AI? Can AI systems really be creative? Do we risk human creativity as we adopt AI? Will our communities be sustainable? The goal of this event is to highlight three points of view on artificial intelligence and creativity.
Dr. Peter R. Lewis and Doctoral student Nathan Lloyd attended the 5th Workshop on Self-Awareness in Cyber-Physical Systems (SelPhyS), where Nathan presented their co-authored work in a poster session, a short talk, and a subsequent panel.
Dr. Peter Lewis chaired the panel discussion on the responsible adoption of language models at the Workshop on Responsible Language Models (ReLM), held at the 38th Annual AAAI Conference on Artificial Intelligence.
The panel explored how industry and academia can collaborate on the responsible deployment and use of language models, how LLMs are raising new challenges in putting responsible AI frameworks into action, and some of the concerns that emerge when users’ expectations of AI inevitably fail to align with the capabilities of the machine. The panel featured speakers from Google, Microsoft, Borealis AI, and Roche, as well as from academia.
Trustworthy AI Lab Director and Canada Research Chair Dr. Peter Lewis was featured as an expert speaker on a recent Lancaster House webinar on the impact of AI on work and labour relations.
Trustworthy AI Lab researchers, along with international collaborators, have published two articles in the latest issue of IEEE Technology and Society Magazine.
The first, Human Centricity in the Relationship Between Explainability and Trust in AI, with Zahra Atf, discusses why explainable AI does not always increase trust, and can sometimes even reduce it.