“Trust is Dynamic”: A Conversation with Chenzhao Li on Human-AI Trust
Authors: Leslie Nghiem & Sneha Choudhury
Meet Chenzhao
Chenzhao Li is a Master’s student (Interdisciplinary Studies) at the Human-AI Collaboration (HAIC) Research Lab at the University of New Brunswick, supervised by Dr. Yu. Her academic background is in education, with training in educational psychology and a focus on how people think and make decisions. As AI systems become more common in daily life, she became intrigued by the way people interact with them, especially the tension between their impressive capabilities and occasional mistakes.
Her everyday experiences, including noticing differences in driver-assistance systems while driving rental cars, further sparked her curiosity about how humans develop trust in technology. This interest ultimately led her to study trust in human–AI interaction.
Overview: What the project is about
Chenzhao’s project explores how trust in AI shifts depending on context, past experience, and stakes. A person might follow a navigation suggestion without thinking, yet be highly skeptical when AI offers medical advice. The aim is not simply to increase trust, but to understand when and why trust rises or falls so developers can build systems that help people collaborate with AI more safely and effectively.
Methods & Ethics: How the research was done
The team combines controlled experiments, surveys, and behavioral data: participants are asked directly about trust, and their actual behavior is observed (e.g., whether they accept or override AI suggestions). Importantly, participant data is anonymized and stored on secure university servers; informed consent and university ethics guidelines are followed throughout. As Chenzhao explains, they “combine what people say with how they behave.”
Key Findings: What surprised us
● Trust is dynamic. It changes with context and prior experience, rather than being a fixed trait.
● Context matters most. People treat the same AI differently depending on the task; for example, entertainment recommendations and life-or-death decisions produce very different trust behaviors.
● Experience shapes reaction. Experts may undertrust systems they think they can outperform; novices may overtrust them.
A direct takeaway from the studies: “Trust is dynamic, it is not stable.”
Practical Takeaways: How this matters for everyday AI
The research highlights a few key lessons for how we use AI in daily life. One important takeaway is that AI systems should adapt their level of explanation depending on the situation. In high-stakes scenarios like healthcare or driving, users need clearer explanations to understand how the AI reached its recommendations.
Developers can also play a role in encouraging responsible use. Small reminders that AI can make mistakes help users stay aware and avoid over-relying on the technology. As the researcher puts it: “It’s important to keep your critical thinking, don’t just accept whatever AI tells you, but question it and double-check important information.”
Looking ahead, trust in AI may evolve as younger generations grow up with these systems everywhere, from chatbots to smart devices and social media platforms. For them, using AI will feel natural, much like using Google did for earlier generations. With everyday exposure, they may develop a better instinct for when to rely on AI and when to verify it, ultimately learning to treat it as a helpful assistant without overtrusting or undertrusting it.
Next Steps: Where the research goes from here
The team plans to run more experiments and expand into long-term studies, with a particular interest in autonomous vehicles, a domain where trust has direct safety implications. The researcher notes a desire to study how trust evolves over weeks or months of real-world use rather than single lab sessions.
Personal Reflection: How the project changed her own AI use
Working on the research made her more self-aware about when she trusts AI and when she double-checks it. She uses AI confidently for brainstorming and ideation, but exercises extra caution when the stakes are high, a habit she developed through the research itself. As she says, it has “made me more aware of my own trust patterns.”
Statement on Blog Authorship
While working on this interview blog, Leslie and Sneha brainstormed interview questions first, then used AI to generate ideas for a few more. AI was also used to organize the questions into sections and refine their wording. Leslie and Sneha conducted the interview and took notes to write the first draft of the blog, which was later refined with AI to improve its flow and language.
Personal Reflection from Sneha
As a minimal user of AI in my personal life, I have noticed how my trust dynamics with AI have changed. As a student, I used AI sparingly out of fear of plagiarism or hallucinated content. Now I use it when I need help organizing my thoughts or planning something. I am still wary of giving AI any personal information, and I always try my best to do something myself before relying on it.
Personal Reflection from Leslie
I find it inevitable that we evolve and grow together with artificial intelligence nowadays; no matter how much a person denies using it, consuming any media made with AI is a secondary form of use. I work hand in hand with AI in most areas of my life, from personal to professional, and the key to staying humane is engaging with it through your own awareness, mind, and critical thinking. Reliance on AI is growing, and the only things that can keep you aware are your own logic and self-regulation.