Research exposes unintended consequences of AI for consumers
Distinguished Research Professor and Fellow of the Royal Society of Canada, Russell Belk, has carved a unique niche in academia by delving into artificial intelligence (AI) in a very different way, with the consumer at the centre.
The conclusions of his most recent article, “Machines and Artificial Intelligence,” published in the Journal of Marketing Behaviour (2019), are thought-provoking if not foreboding: “The ultimate concern for future generations may not be that we pale intellectually and physically in comparison to the machines we’ve created but rather that we suffer economically at the hands of such machines.”
The article unpacks how we got here, digging into a myriad of topics and profound philosophical questions. “My goal is to stimulate marketing and consumer research into related issues, including the possible results of our current and future engagement with technology,” Belk says.
He is an incredibly prolific academic. His annual list of publications is long, varied and extends beyond conventional consumer research. He digs much deeper to ponder the meaning of possessions, materialism, collecting and sharing.
This work is qualitative, interpretive and cultural in nature. “In a consumer society, our ideas about ourselves are often bound up or represented in what we desire, what we own, and how we use these things,” he explains.
Belk considers our relationship to machines
This article begins with a philosophical examination of humans and our tools – implements that differentiate us from animals and also become extensions of ourselves. Examples of smartphones and laptops are particularly apt.
The tools we create are powerful and fast. We humans then add intelligence, creativity and problem-solving to the mix. But what happens when the machines dip into our domain, when technology threatens to slip from human control? “There is a lingering fear that our machines may out-do, out-smart and out-power us,” Belk says.
He considers our engagement with technology and how it entails a sense of ownership, rights and responsibilities. But in the case of a humanoid robot or self-driving car, these tools may have (or develop) their own rights and responsibilities. (Machine/robot ethics is a blossoming field.)
Belk points out that as our machines become more human-like, we become more machine-like. “We magnify our capabilities with hand-held computers, we replace our body parts with prostheses and we may soon modify our genes to procure additional benefits for ourselves and our progeny, including an extended lifespan,” he says.
Looks at issues through moral, existential lenses
Belk doesn’t believe there will be a robot rebellion, but instead a series of small concessions. His article profiles developments and speculations involving computers, algorithms, AI, robots, cyborgs, transhumanism, posthumanism and more.
In this, he broadens the discussion to consider what it means to be human, what it means to be a machine, and the idea of a desired extended lifespan. He does this using four lenses: the sacred, the moral, the societal and the existential.
Topics for future research: from driverless cars to robots for sex
After this thorough analysis, Belk introduces vital areas of future consumer research. “These topics bear on future consumer well-being and perhaps even human survival,” he emphasizes.
He believes that the robots entering our homes, streets and factories are problematic. First, there’s the loss of employment. Driverless vehicles, for example, will mean that truck drivers lose their vocation, as will taxi, Uber and Lyft drivers.
Similarly, robots used in retail spaces, hotels, nursing homes and hospitals pose a problem. How willing would consumers be to interact with and trust robots in these settings? Belk wonders. To build that trust, robots are given human-sounding voices and the ability to detect and respond to human emotions.
Interestingly, Belk points out that robots are deemed more trustworthy in some cultures than in others. In Japan, the idea of a robot caring for an elderly person is more accepted than in the West.
Then there’s morality. Robots being programmed to act on moral grounds poses another set of issues. How would it work and what if it malfunctioned? What would happen if an autonomous military weapon, or robot-soldier, committed a criminal act, or a self-driving car killed a pedestrian? Who would be held accountable?
Then there’s the thorny issue of robots for sex. Some see this as a further dehumanization of women; others interpret it as a cure for loneliness or a disease-free form of prostitution.
As Belk raises these key questions, he also warns us of the dangers. “It would be nice if robots and AI could be harnessed for the good of humankind, to eliminate poverty and provide a life of leisure for us all,” he says. But he fears that a few wealthy entrepreneurs would further divide the world into haves and have-nots, reinforcing inequity. In this article, Belk underscores the irony: We may suffer at the hands of the machines we made to improve our lives.
To learn more about Belk’s work, visit his faculty profile page. To read the article in the Journal of Marketing Behaviour, visit the website.
To learn more about Research & Innovation at York, follow us at @YUResearch; watch our new animated video, which profiles current research strengths and areas of opportunity, such as Artificial Intelligence and Indigenous futurities; and see the snapshot infographic, a glimpse of the year’s successes.
By Megan Mueller, senior manager, Research Communications, Office of the Vice-President Research & Innovation, York University, firstname.lastname@example.org