Media Contact: Carmen Ramos Chandler, carmen.chandler@csun.edu, (818) 677-2130
Whether we like it or not, artificial intelligence (AI) is here to stay.
As long as we understand its limitations, California State University, Northridge psychology professor Ellie Kazemi said AI has the potential to help bridge social service and education gaps in underserved communities.
“If you’re a therapist, if you’re a social worker working with individuals in poor and underrepresented communities, the reality is you are probably overworked, juggling a heavy caseload and lack the resources to truly meet the needs of your clients,” Kazemi said. “AI can help improve clinical documentation and measurement, while at the same time saving valuable time by developing individualized activities for your clients. The same is true for educators, who are always trying to find ways to meet the individual needs of their students.
“But AI is only as good as its input,” she said. “Some people have a fear of new things or worry that AI will replace a human workforce. I don’t think that’s going to happen. There’s clear evidence that no matter how much power and how much analytics we feed into these systems, they will still need humans to learn about the human world, and that education is ongoing, because humans are unique and always evolving and changing.”
Kazemi, who leads an effort out of CSUN’s College of Social and Behavioral Sciences to help faculty understand how to utilize AI and virtual reality in their classrooms, noted that AI has been readily adopted in the retail and marketing worlds, with businesses using algorithms to track buyers’ purchases to create better shopping experiences or push the sale of a particular product.
“But the reality is that AI can improve our lives in much less commercial and more personal ways,” she said.
Kazemi uses AI and virtual reality to train graduate psychology students how to work with recalcitrant and angry clients before they enter a clinical setting and work with real patients.
When it comes to therapy, community-based nonprofits and education, she said AI can help develop individualized plans for patients, clients and students, or assist with marketing plans and outreach efforts — freeing up valuable time and resources that could be spent with clients and students, or facilitating healthier work/life balances.
She pointed to a fellow faculty member who developed a chatbot that helped her reach students in her class who were reluctant or unable to come to her office hours when they needed help.
“Now she, or the bot, can be there for them when they need it, not just when their professor has the time,” Kazemi said. “But her students are in an upper-division class. The same system may not work for freshmen or first-time college students. That’s one of the drawbacks of AI. Just like any other tool, it’s not a one-size-fits-all solution. Like any other tool, the person using it needs to understand who they are trying to help and then how to adapt the tool to that person’s or group’s specific needs.”
It’s in making those adaptations, Kazemi said, that AI users need to be aware of the technology’s limitations. She pointed to a graphic she recently asked an AI tool to generate for a presentation. She requested an image featuring a biracial child and parent working with a therapist, without specifying gender. What she got instead was an image that missed the mark: two Black children and a Black woman.
“What you are going to get is only as good as the data that is entered, and right now, a lot of the information being fed into AI is homogeneous and does not reflect the nuances of human life, including its diversity,” she said.
Kazemi said she understands some of the reservations surrounding AI.
“Humans have a long history of being afraid when new technology is introduced,” she said, pointing to the calculator as an example. “People worried about what would happen if it was used in the classroom. Would it impact students’ tests? Would it impact math itself? They are excellent questions, and a great example of how we cannot envision the changes new technology can bring.
“Ideally, our society would have time to consider, discuss openly, AI parameters, guardrails, ethics and the other issues that people are worried about,” Kazemi said. “But we don’t. And the reality is, we never have had that time when new technology is introduced. We move, we experience and then we correct it. I think a lot of the fear around AI is that we’re dealing with enormous growth and enormous capacity. The thing is, whether we fear it or not, whether we would like more time to think about those guardrails or not, it’s here.
“We need to educate ourselves about AI and think about what AI can and cannot do, and what we want it to do and not to do,” she said. “Since it is here, we need to learn how we can use it to make our lives better.”