More and more each year, we're being told to trust AI.
With the rise of ChatGPT, Stable Diffusion, self-driving cars, and smart homes, it seems like only a matter of time before we walk into a world straight out of Demolition Man.
But are we really over-reliant on our artificial intelligence counterparts? How well do humans get along with AI teammates — and how does that change when they realize that their AI counterpart isn't as infallible as they've been led to believe?
These are the questions the SHINE Lab at the University of Colorado Boulder is looking to address.
As a research assistant, it was my job to help run an experiment designed to see how participants collaborated with each other and with an AI. To better understand user trust, we used a brain imaging technique called functional near-infrared spectroscopy (fNIRS), which measures blood flow to various parts of the brain. By manipulating the reliability and the transparency of the AI's actions, we hoped to demonstrate how effective users are at trust calibration. If users correctly calibrate their trust, they can give the AI more autonomy when its reliability is high; if they miscalibrate, they can end up trusting the AI even when it is underperforming on the tasks assigned to it.
The work in this lab inspired my love for research and underscored the importance of mentorship, generosity, and kindness in professional careers. Thanks to my co-workers at this lab, I was able to get my first paper published in Theoretical Issues in Ergonomics Science and meet one of the leading experts in human factors.