De-escalating robocops? York study imagines future of crisis response 

Robotic hand reaches for human hand

By Corey Allen, senior manager, research communications

Picture this: a 911 operator in your city receives a call from a person in mental distress and needs to send help.  

They could dispatch the police or an integrated unit of both police and mental health professionals. But instead, the operator sends a robot.  

This scenario may sound like science fiction, but it’s the kind of futuristic thinking that has researchers at York University considering all angles when it comes to artificial intelligence (AI) and crisis response.   

Building more empathetic bots through interdisciplinary research  
Kathryn Pierce

In a paper published in Applied Sciences earlier this year, psychology PhD candidate Kathryn Pierce and her co-authors explore the potential role robots could play in crisis de-escalation, as well as the capabilities engineers would need to program them to be effective.    

The visionary paper is part of a larger project at the Lassonde School of Engineering that involves early-stage research to design and test robots to assist in security and police force tasks. The York engineers asked the psychology researchers to bring a social-scientific lens to their forward-thinking work on humanizing machines.  

“De-escalation is not a well-researched topic and very little literature exists about what de-escalation really looks like moment by moment,” says Pierce, who is supervised by Dr. Debra Pepler, a renowned psychologist and Distinguished Research Professor in the Faculty of Health. “This makes it difficult to determine what kinds of behavioural changes are necessary in both responders and the person in crisis to lead to a more positive outcome.”   

No hard and fast rules for de-escalation, for humans or robots  

With limited academic understanding of what really happens in human-to-human interactions during a crisis response, let alone robot-to-human ones, training a robot to calm a person down is an incredibly tall task.  

Despite the challenge, Pierce and her co-authors were able to develop a preliminary model outlining the functions a robot should theoretically be able to perform for effective de-escalation. These functions are made up of verbal and non-verbal communication strategies that engineers would need to be mindful of when building a robot for such a task.    

Some of these strategies include a robot’s gaze – the way machine and human look at one another – the speed at which it approaches (slow and predictable), and the sound and tone of its voice (empathetic and warm).  

But, as the researchers point out, ultimately, robots cannot be “programmed in a fixed, algorithmic, rule-based manner” because there are no fixed rules for how people calm each other.   

“Even if there were algorithms governing human-to-human de-escalation, whether those would translate into an effective robot-to-human de-escalation is an empirical question,” they write.  

It is also difficult to determine whether people will react to robots emulating human behaviour the same way they would if it were an actual person. 

Advances in AI could add new layer of complication to the future of crisis response  

In recent years, the use and discussion of non-police crisis response services have garnered growing attention in various cities across North America, and elsewhere in the world.  

Advocates for replacing traditional law enforcement with social workers, nurses or mental health workers – or at least the integration of these professionals with police units – argue that this leads to better outcomes.  

Research published earlier this year showed that police responding to people in mental distress use less force if accompanied by a health-care provider. Another study found that community responses were more effective for crime prevention and cost savings.  

Introducing robots into the mix would add to the complexity of crisis response services design and reforms. And it could lead to a whole host of issues for engineers, social scientists and governments to grapple with in the future. 

The here and now 

For the time being, Pierce and her co-authors see a machine’s greatest potential in video recording. Robots would accompany human responders on calls to film the interaction. The footage could then be reviewed, allowing responders to reflect on what went well and what to improve upon.  

Researchers could also use this data to train robots to de-escalate situations more like their human counterparts.    

Another use the researchers theorize for AI surveillance would be robots trained to identify individuals in public who are exhibiting warning signs of agitation, allowing police or mental health professionals to intervene before a crisis point is ever reached.  

While a world in which a 911 operator dispatches an autonomous robot to a crisis call may still be hard to conceive, Pierce and her co-authors do see a more immediate, realistic line of inquiry for this emerging area of research.  

“I think what’s most practical would be to have engineers direct their focus on how robots can ultimately assist in de-escalation, rather than aiming for them to act independently,” says Pierce. “It’s a testament to the power and sophistication of the human mind that our emotions are hard to replicate. What our paper ultimately shows, or reaffirms, is that modern machines are still no match for human intricacies.”  

Background  

The paper, “Considerations for Developing Robot-Assisted Crisis De-Escalation Practice,” was co-authored by Pierce and Pepler, along with Michael Jenkin, a professor of electrical engineering and computer science in the Lassonde School of Engineering, and Stephanie Craig, an assistant professor of psychology at the University of Guelph.  

The work was funded by the Canadian Innovation for Defence Excellence & Security Innovation Networks. 

Smart holidays: how AI can make the season brighter 

Two people wearing holiday-themed sweaters using smartphones

By Ashley Goodfellow Craig, YFile editor 

As the festive season approaches, incorporating artificial intelligence (AI) technology into holiday planning can transform the way individuals navigate what are often hectic preparations. For those unfamiliar with AI, or those hesitant to embrace it, understanding its practical applications can help give back the gift of time. 

AI tools such as ChatGPT can be invaluable in simplifying holiday tasks. Though some may have reservations about using AI due to concerns about privacy, or uncertainty about its capabilities, ChatGPT offers settings to help keep data secure, such as turning off the “chat history and training” setting.

To protect your privacy while using ChatGPT, you can also take the following steps: 

Avoid sharing any personal information such as your name, address, phone number, email address or financial information, says Vidur Kalive, artificial intelligence architect lead for York University. Kalive also suggests using a virtual private network (VPN) to encrypt your internet traffic and hide your IP address. 

What is ChatGPT? It’s an AI-powered conversational assistant designed to understand and generate human-like text based on the input it receives. It is capable of generating text based on context and past conversations. It can answer questions and assist you with tasks such as composing emails, essays and code.

Set up with a user-friendly interface, ChatGPT was designed to be accessible even to those less familiar with technology. For holiday planning tasks, it can guide users through suggestions and solutions tailored to their individual needs. 

“A well-structured prompt will ensure that ChatGPT gives you useful, relevant information,” says Kalive. “Prompts are the inputs that you provide to ChatGPT to generate a response.” 

To write effective prompts, consider the following tips: 

  1. Write one or two lines describing the task at hand, its purpose, your intended audience and the output you need ChatGPT to generate. 
  2. Assign ChatGPT a role – as an identity or profession – to help guide its responses. It will then generate output based on the area of expertise related to the role you assigned to it.  
  3. Keep prompts concise and to the point. Long prompts can lead to irrelevant or incorrect responses.  

Example prompt: You are a chef who specializes in holiday cuisine. Create a Christmas dinner menu that includes a main course, side dishes and dessert. You will be serving eight guests. Two of them are vegetarians.  
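
For those comfortable with a little code, the same role-plus-task pattern can also be sent to ChatGPT programmatically. Below is a minimal sketch using OpenAI’s official Python client; the model name is an assumption, and the client expects an API key in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: sending a role-based holiday prompt to ChatGPT
# through OpenAI's official Python client. Assumes the `openai`
# package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[
        # The assigned role ("chef") guides the style of the answer.
        {"role": "system",
         "content": "You are a chef who specializes in holiday cuisine."},
        # The task, audience and constraints, kept short and concrete.
        {"role": "user",
         "content": "Create a Christmas dinner menu that includes a main "
                    "course, side dishes and dessert. You will be serving "
                    "eight guests. Two of them are vegetarians."},
    ],
)

print(response.choices[0].message.content)
```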

Below are some examples of how AI, and tools like ChatGPT, can trim planning time for the holidays. 

Food planning: One significant advantage is ChatGPT’s ability to plan and organize shopping lists, and to create them from recipes. It will also offer recipes for anything from appetizers to desserts to festive drinks. It can curate recipes and meal plans tailored to dietary needs, and it can even help plan a cooking schedule. 

Gift items: Using AI to generate gift lists based on preferences, budget and recipient details can inspire new ideas and save time shopping in crowds. 

Greetings and messages: With the right prompts for tone, AI can expertly craft personalized greetings and messages. Using AI-generated images, personalized holiday cards can also be easily designed. Crafting poems and creating stories and songs with a personal touch are also options for holiday-related AI content. 

Consider this short poem, generated by ChatGPT: 

At York University, the campuses are aglow,  
Faculty, staff and students in festive flow. 
Amidst winter’s charm and joyous sights, 
Shared moments shine in holiday delights. 

Decorating: Ask about the latest decorating trends, or where to find specific decorative items, to make your space more festive. 

Holiday-themed activities: For those who enjoy outings, experiences and more, ask ChatGPT to summarize activities in the area or provide a few holiday-inspired games to play with family and friends. 

Travel: Those planning travel during the festive season can rely on AI assistants to generate itineraries, provide real-time updates on weather or traffic conditions, offer directions and even suggest offbeat local experiences. 

By harnessing the power of AI in holiday planning, individuals can streamline preparations, reduce stress and create more time to spend with friends and family.  
 

Using AI to enhance well-being for under-represented groups

A man meditating

Kiemute Oyibo, an assistant professor at York University’s Lassonde School of Engineering, is leveraging artificial intelligence (AI) and machine learning to build group-specific predictive models for different target populations to promote positive behaviour changes.

Kiemute Oyibo

From reminders to take a daily yoga lesson to notifications about prescription refills, persuasive technology is an effective technique used in many software applications. Informed by psychological theories, this technology can be incorporated in many electronic devices to change users’ attitudes and behaviours, including habits and lifestyle choices related to health and well-being.

“People are receptive to personalized health-related messages that help them adopt beneficial behaviours they ordinarily find difficult,” says Oyibo.

“That is why I am designing, implementing and evaluating personalized persuasive technologies in the health domain with a focus on inclusive design, and tailoring health applications to meet the needs of under-represented groups.”

By considering the specific needs of these groups, Oyibo’s work has the potential to change the one-size-fits-all approach of software application design. “By excluding features which may discourage some populations from using certain health applications and focusing on their unique needs, such as the inclusion of cultural elements and norms, personalized health applications can benefit users from marginalized communities,” he explains. “Another method that can help improve user experience is participatory design. This enables under-represented groups, such as Indigenous Peoples, to be a part of the design and development of technology they will enjoy using.”

Through demographic studies, Oyibo is investigating the behaviours, characteristics, preferences and unique needs of different populations, including under-represented groups, throughout Canada and Africa. For example, he is examining cultural influences on users’ attitudes and acceptance of contact tracing applications – an approach that is unique for informing the design and development of public health applications.

“Group-specific predictive models that do not treat the entire target population as a monolithic group can be used to personalize health messages to specific users more effectively,” says Oyibo of his work, which is supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant.

In related work, Oyibo is collaborating with professors from Dalhousie University and industry partners at ThinkResearch to explore the application of persuasive techniques in the design of medical incident reporting systems, to improve their effectiveness in community pharmacies across Canada.

“There are a lot of near misses and incidents in community pharmacies across Canada that go unreported,” says Oyibo. “Apart from personal and administrative barriers, such as fear of consequences and lack of confidentiality in handling reports, the culture of little-to-no reporting reflects system design. We want to leverage persuasive techniques to enhance these systems and make them more motivating and valuable, to encourage users to report as many incidents and near misses as possible so that the community can learn from them. This will go a long way in fostering patient safety in community pharmacies across Canada.”

Oyibo’s work is part of a global effort to bridge the digital divide in health care and utilize technology to improve the lives of diverse populations.

York-developed safe water innovation earns international praise

Child drinking water from outdoor tap water well

The Safe Water Optimization Tool (SWOT), an innovative technology that helps humanitarian responders deliver safe water in crisis zones, was recently highlighted as a success story in two international publications. The tool was developed by two professors affiliated with York University’s Lassonde School of Engineering and the Dahdaleh Institute for Global Health Research.

Syed Imran Ali

Built by Syed Imran Ali, an adjunct professor at Lassonde and research Fellow at the Dahdaleh Institute, in collaboration with Lassonde Associate Professor Usman Khan, the web-based SWOT platform generates site-specific and evidence-based water chlorination targets to ensure water remains safe to drink all the way to the point of consumption. It uses machine learning and process-based numerical modelling to generate life-preserving insight from the water quality monitoring data that is already routinely collected in refugee camps.
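
The article does not detail the SWOT’s models, but the core intuition can be sketched: if free residual chlorine (FRC) decays roughly first-order during household storage, paired tapstand and household measurements allow the decay rate to be estimated and a tapstand target back-calculated. The sketch below is purely illustrative, with hypothetical numbers and thresholds; the actual tool learns from many samples rather than a single pair.

```python
# Illustrative sketch only - not the SWOT's actual model. Assumes
# first-order FRC decay during storage: C(t) = C0 * exp(-k * t).
import math

def estimate_decay_rate(c_tapstand, c_household, hours_elapsed):
    """Estimate the first-order decay constant k from one paired sample."""
    return math.log(c_tapstand / c_household) / hours_elapsed

def tapstand_target(k, protection_hours, c_min=0.2):
    """FRC (mg/L) needed at the tapstand so at least c_min remains
    after protection_hours; c_min here is an assumed threshold."""
    return c_min * math.exp(k * protection_hours)

# Example: FRC fell from 0.5 to 0.3 mg/L over 6 hours of storage.
k = estimate_decay_rate(0.5, 0.3, 6.0)
print(f"decay rate k = {k:.3f} per hour")
print(f"target for 24 h of protection: {tapstand_target(k, 24.0):.2f} mg/L")
```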

One of the SWOT’s funders, the U.K.-based Elrha Humanitarian Innovation Fund, recently published a case study on the tool to serve as an example of a successful humanitarian innovation.

As a result of that publication, the SWOT was then highlighted as a success story in another case study, this time in the U.K. government’s latest white paper, titled “International development in a contested world: ending extreme poverty and tackling climate change.”

Water quality staff tests chlorination levels in household stored water at the Jamam refugee camp in South Sudan. Photo by Syed Imran Ali.

“These international recognitions highlight the impact our research is having on public health engineering in humanitarian operations around the world,” explained Ali.

As his team works to scale up the SWOT globally, he believes these publications will help increase awareness of and confidence in the technology. “We’re excited to build new partnerships with humanitarian organizations and help get safe water to the people who need it most,” he said.

For more information about the Safe Water Optimization Tool, visit safeh2o.app.

To learn more about how this innovation is advancing, read this YFile story.

Federal grant supports innovative project to improve Canadian digital health care


A three-year grant totalling $500,000 will fund a collaborative project between York University Professor Maleknaz Nayebi and RxPx, a company that creates and supports digital health solutions.

Maleknaz Nayebi

Nayebi is a professor in the Lassonde School of Engineering in the Department of Electrical Engineering & Computer Science and a member of the Centre for Artificial Intelligence & Society (CAIS). CAIS unites researchers who are collectively advancing the state of the art in the theory and practice of artificial intelligence (AI) systems, governance and policy. The research includes a focus on AI systems addressing societal priorities in health care.

The funding, awarded by the Natural Sciences & Engineering Research Council of Canada’s Alliance Grant program, will support the development of the Digital Health Defragmenter Hub (DH2).

Alliance Grants support university researchers collaborating with partner organizations to “generate new knowledge and accelerate the application of research results to create benefits for Canadians.”

This collaborative project aims to address the intricate challenges within the Canadian digital health-care landscape by integrating advanced software engineering principles with machine-learning algorithms.

The project’s goal is to develop a software platform dedicated to digital health services. Currently, digital health services are designed and offered in isolation from other social, economic or health services, says Nayebi, adding that this results in inharmonious digital health care where many services overlap, while many pain points and requirements remain unaddressed.  

“Lack of co-ordination among providers, the inability of patients to choose services and make open decisions, the rigidity of the market toward digital innovations and isolation of providers are known as the main barriers in the Canadian digital health-care ecosystem,” says Nayebi. “In this ecosystem, the physicians act as service-supply-side monopolists, exercising significantly more power than their demand-side patients. A survey conducted by PricewaterhouseCoopers showed the unpreparedness of the ecosystem, where only 40 per cent could envision a collaboration with other organizations. This further leads to increased inequality within the health-care system. In contrast, 62 per cent of American-based active health-care organizations had a digital health component in their strategic plan.”

DH2 is a platform that brings together open innovation in health care, allowing health-care providers to deliver personalized services to the public. The project aims to provide software and AI-based technology that makes digital health services more affordable and accessible to a broader population, integrates innovative business strategies for new entrants or low-end consumers, and creates a value network where all stakeholders benefit from the proliferation of innovative technologies.

“DH2 serves as a marketplace where not only can individuals with basic health-care services contribute, but it also features AI-driven matchmaking services, connecting patients with the specific demands of health-care providers and caregivers,” says Nayebi.

In this capacity, DH2 addresses the fragmentation in the wellness and health ecosystem by enabling users and user communities.

“DH2 goes beyond just connecting people; it also uses machine learning to help patients make informed decisions about their digital health-care options. Such platforms can act as the governing and strategic solution for leading market and innovation, and provide faster time to market by assisting providers in their deployment, distribution and monetization processes. They provide even access to information for all parties and effectively reduce inequalities.”
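
The article does not specify how DH2’s matchmaking works under the hood. As a purely hypothetical illustration of the general idea, a matchmaking step can be sketched as scoring the overlap between a patient’s needs and each service’s capabilities; every name and tag below is invented.

```python
# Hypothetical sketch of AI-driven patient-service matchmaking of the
# kind DH2 describes; the article does not specify the real algorithm.
# Patients and services become vectors over shared tags, and services
# are ranked by cosine similarity.
import math

TAGS = ["diabetes", "mental_health", "mobility", "french_language", "remote"]

def vectorize(tags_present):
    return [1.0 if tag in tags_present else 0.0 for tag in TAGS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

services = {
    "TeleEndo Clinic": vectorize({"diabetes", "remote"}),
    "Community Counselling": vectorize({"mental_health", "french_language"}),
}
patient = vectorize({"diabetes", "remote", "french_language"})

# Rank services for this patient, best match first.
for name, vec in sorted(services.items(),
                        key=lambda kv: cosine(patient, kv[1]), reverse=True):
    print(f"{name}: {cosine(patient, vec):.2f}")
```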

Moreover, says Nayebi, platforms like this enhance the diversity of participants across different geographic locations, establishing an ecosystem that enables quicker responses to disruptive events such as the COVID-19 pandemic.

Faculty of Liberal Arts & Professional Studies sheds light on new projects, global opportunities


In this issue of Innovatus, you will read stories about how the Faculty of Liberal Arts & Professional Studies (LA&PS) is responding to the needs of our students with innovative new projects and programs to help them succeed in a rapidly changing world.

Faculty of Liberal Arts & Professional Studies Dean J.J. McMurtry.

One such program is our 12 U Math waiver pilot class. After the COVID-19 lockdowns, it became clear that some students needed to catch up in math fundamentals. This prompted the development of the pilot class to help address the numeracy shortfall experienced by many incoming LA&PS students.   

We also know that students want paid work experience in opportunities related to their field of study; this is one of the reasons paid co-op placements will replace internships and be available for all LA&PS programs starting September 2024.  

And now, more than ever, we know global leaders need a global perspective. We’ve reactivated our fleet of summer abroad opportunities, offering seven study abroad courses in 2024.  

Finally, educators across universities are all grappling with artificial intelligence (AI). Learn more in this issue about how we are dealing with both the drawbacks and benefits of AI. 

Thank you to our entire LA&PS community for all the work you have put into making our teaching and pedagogy so great.  

I hope you enjoy learning more about some of the ways we are helping our staff, students and faculty.  

J.J. McMurtry
Dean, Faculty of Liberal Arts & Professional Studies 

Faculty, course directors and staff are invited to share their experiences in teaching, learning, internationalization and the student experience through the Innovatus story form, which is available at tl.apps01.yorku.ca/machform/view.php?id=16573.


In this issue:

LA&PS study abroad program evolves, expands its offerings
Students in LA&PS have opportunities – at home and abroad – to engage in global citizenship and learning.

Summer course opens door for students missing numeracy skills
A pilot program created to close the gap on math skills is adding up to success for students in LA&PS.

LA&PS opens conversation about academic honesty and artificial intelligence
A recent event to educate students about generative artificial intelligence, and the University’s policies, sparked meaningful discussions about the changing landscape of education.

It’s co-op programs, not internships, for liberal arts and professional studies students
The introduction of an optional paid co-op program will allow students to participate in work-integrated learning earlier in the educational journey.

LA&PS opens conversation about academic honesty and artificial intelligence 

By Elaine Smith 

With generative artificial intelligence (AI) top of mind for many members of the York community these days, the Faculty of Liberal Arts & Professional Studies (LA&PS) decided that Academic Honesty/Integrity Month was a perfect opportunity to discuss the topic with students. 

LA&PS held a tabling event at Vari Hall on Oct. 24 to educate students about generative AI, address the current parameters for using it in courses and build digital literacy around these emerging tools. They also posed scenarios involving AI so students could consider what is appropriate in various contexts. Approximately 150 students stopped to talk with faculty and staff on hand. 

“We’re really thinking about being proactive and connecting with students around academic honesty and AI in more engaging ways,” said Mary Chaktsiris, a historian and associate director of teaching innovation and academic honesty for LA&PS. “We hope that as a result of this event, students will reach out to instructors to talk about generative AI and connect with available supports at York.”

Students at a tabling event in Vari Hall learn more about academic integrity in the context of artificial intelligence.

Chaktsiris and the LA&PS academic honesty team co-led the Vari Hall event with Stevie Bell, head of McLaughlin College and an associate professor with the Writing Department. They had support from Michelle Smith, a learning innovation specialist, and academic honesty co-ordinators Namki Kang and Angelica McManus. Neil Buckley, associate dean of teaching and learning, and knowledgeable representatives from the Writing Centre, Peer-Assisted Study Sessions (PASS) and Student Numeracy Assistance Centre (SNACK) instructional teams were also on hand to converse with students. 

“We wanted students to get the facts about academic honesty and give them some guidance regarding AI now that the York Senate’s Academic Standards, Curriculum and Pedagogy (ASCP) Committee has given a policy clarification,” said Buckley. “It was an opportunity to inform students about this, because every student experiences AI in different contexts, and this is a domain that will be growing and growing.” 

ASCP states that “Students across York are not authorized to use text-, image-, code- or video-generating tools when completing their academic work unless explicitly permitted by a specific instructor in a particular course.” As part of a regular review process, a newly revised Senate Policy on Academic Honesty is expected to be announced in coming months. 

Bell noted, “In my experience with academic honesty since I began teaching writing in 2002, I’ve never found a student who wanted to cheat; they want to find out how to do things correctly. 

“So, we brought the conversation to Vari Hall. We wanted this event to be an inviting space for students to discuss AI openly, because the landscape is shifting. In some courses, professors suggest that students use it to do specific tasks, while in other courses, it’s a no-go zone. We wanted students to know how to talk to their professors about it. From talking to students in the Writing Department, I know they are very confused about if, when and how to use AI, so this was very generative for all.”

Students at a tabling event in Vari Hall.

Students had a variety of concerns to share at Vari Hall. Some wanted to talk specifically about academic honesty, but others wanted to discuss generative AI itself. Faculty, too, are exploring AI, Buckley noted. For example, the Teaching Commons has a community of practice dedicated to discussing AI and how it is being used across campus, and it recently held a Summit on Generative AI in Higher Education. With the use of AI expected to grow exponentially in the workplace, understanding how to use generative AI will be essential. 

“AI is already a tool in the workplace,” Bell said. “If you look at job postings on the Indeed site, for example, many of them request experience in using generative AI technology productively. As a result, in the Writing Centre, we’re looking at building digital literacies. Students need to understand generative AI’s incentives and motivations to tell you what you want to hear, and they need to learn to fact check. 

“The questions can become very nuanced. For instance, are you giving away a company’s proprietary information if you use it?” 

The success of the Vari Hall event inspired the LA&PS team and they would like to see the conversation continue. Bell has begun holding ongoing workshops at the Writing Centre with a student focus; the first one drew 75 people, including teaching assistants.  

“From a pedagogical perspective, connection and conversation are important parts of navigating the emergent aspects of AI,” Chaktsiris said. “More connections with students will be important to building digital literacies and helping navigate the shifting contexts of generative AI. A focus on connection and support also leans into more inclusive pedagogical practice. I hope there are more touch points for us to discuss AI and academic honesty more generally.” 

Students who have questions can turn to available LA&PS resources such as the Writing Centre, PASS, SNACK, peer mentors, academic advising and academic honesty co-ordinators to discuss generative AI and academic honesty in more detail.

York Circle Lecture Series presents experts on topical subjects

York Circle Lecture series

In collaboration with Jennifer Steeves, the York Circle Chair and associate vice-president research, the Office of Alumni Engagement invites the community to York University’s Keele campus for a new instalment of the York Circle Lecture series.

On Nov. 25 from 9 a.m. to 1 p.m. at the Life Sciences Building, prominent faculty members will delve into a diverse array of compelling subjects, reflecting the defining themes of York University.

The York Circle Lecture Series is held four times a year and is open to York’s community, including alumni and friends. Tickets are $5 and include coffee, light snacks and lunch.

Sessions will feature guest speakers, and attendees will be asked to select one lecture from each session during registration.

10 a.m. sessions

Maxim Voronov

Maxim Voronov, professor, organizational behaviour and industrial relations, Schulich School of Business, presenting “The good, the bad, and the ugly of authenticity.”

Authenticity seems ever-present in today’s society, and it has become an important research topic among organizational scholars. Much of the time, both scholars and practitioners see authenticity as unambiguously good. But we need to acknowledge the darker side of authenticity and explore its implications. The purpose of this talk is to explore “the good, the bad and the ugly” of authenticity, shifting the focus away from authenticity as an attribute of people and things and toward unpacking the process by which people and things are cast as authentic. A particular focus will be on unpacking the contribution of authenticity to both social good and social harm.

Emilie Roudier

Emilie Roudier, assistant professor, School of Kinesiology & Health Science, Faculty of Health, presenting “Wildland fires: studying our blood vessels to better understand the impact on health.”

Over the past decade, the intensity and size of wildland fires have increased. Wildland fire seasons have lengthened, and these fires contribute to global air pollution. This presentation will highlight how wildland fire-related air pollution can impact our heart and blood vessels.

11:20 a.m. sessions

Usman Khan

Usman Khan, associate professor and department Chair, Department of Civil Engineering, Lassonde School of Engineering, presenting “Harnessing the power of AI for flood forecasting.”

Floods are the most frequent weather-related natural disasters, affecting the largest number of people globally, with economic damages in excess of $900 billion (between 1994 and 2013). Globally, climate change and urbanization have led to an increase in floods in recent decades and this trend is projected to continue in the coming years, including in Canada. Despite this, Canada is the only G7 country without nationwide flood forecasting systems, which are key to saving lives and reducing the damages associated with floods. Hydroinformatics, the study of complex hydrological systems by combining water science, data science and computer science, attempts to improve traditional flood forecasting through the use of advanced techniques such as artificial intelligence (AI). This talk will outline recent research in this area and plans to build a Canada-wide, open-source, real-time, operational flood forecasting system that harnesses the power of AI to improve our ability to predict and prepare for floods.

Antony Chum

Antony Chum, assistant professor, Canada Research Chair, School of Kinesiology & Health Science, Faculty of Health, presenting “The impact of recreational cannabis legalization on cannabis-related acute care in Ontario.”

This presentation will discuss the effects of cannabis legalization on cannabis-related acute care (emergency department visits and hospitalizations). The research conducted discovered specific impact patterns among different demographic groups. Additionally, the talk will delve into regional disparities and analyze the policy implications arising from the legalization process.

Since 2009, York Circle has showcased the ideas and research being generated by York University’s community. Topics come from every Faculty and have included discussions around gender issues, brain function, mental health, international aid, sports injuries, financial policy and many more evolving subjects.

New funds aid AI methods to advance autism research


Professor Kohitij Kar, from York University’s Department of Biology in the Faculty of Science, is among 28 early-career researchers who received grants valued at $100,000 from Brain Canada’s Future Leaders in Canadian Brain Research program. His project will combine neuroscience and artificial intelligence (AI) studies of vision to advance autism research.

Kohitij Kar

Kar, a Canada Research Chair in Visual Neuroscience, combines machine learning and neuroscience to better understand visual intelligence. His new project funded by Brain Canada will explore these intersections in the context of autism.

“The ability to recognize other people’s moods, emotions and intent from their facial expressions differs in autistic children and adults,” says Kar. “Our project will introduce a new, vastly unexplored direction of combining AI models of vision into autism research – which could be used to inform cognitive therapies and other approaches to better nurture autistic individuals.”

Based on prior funding from the Simons Foundation Autism Research Initiative, Kar’s research team at York University has been developing a non-human primate model of facial emotion recognition in autism. The machine learning-based models the team will use are called artificial neural networks (ANNs), which mimic the way the brain operates and processes information. Kar will develop models that predict, at an image-by-image level, how primates represent facial emotions across different parts of their brain and how such representations are linked to their performance in facial emotion judgment tasks. The team will then use state-of-the-art methods they developed to fine-tune the ANNs and align them more closely with the performance of neurotypical brains and those of autistic adults.

The second part of Kar’s project will focus on using the updated ANNs to reverse-engineer images that could potentially be used to help autistic adults match their facial emotion judgments to those of neurotypically developed adults. This work builds on his previous research (published in the journal Science), which showed ANNs can be used to construct images that broadly activate large populations of neurons, or selectively activate one population while keeping the others unchanged, to achieve a desired effect on the visual cortex. In this project, he will shift the target objective from neurons to a clinically relevant behaviour.
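
The article does not spell out the method, but image synthesis of this kind is commonly done by gradient ascent on the input: start from noise and nudge pixels so the model’s response moves toward a target. The toy PyTorch sketch below shows that general technique; the stand-in network and every parameter are assumptions, not Kar’s actual models.

```python
# Toy sketch of gradient-based image synthesis ("activation
# maximization"), the general family behind designing images that
# drive a network's response toward a target. Not Kar's pipeline.
import torch
import torch.nn as nn

# Stand-in for a pretrained ANN mapping an image to an emotion score.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(1),
)
model.eval()

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(image).squeeze()
    # Maximize the target response (minimize its negative), with a small
    # penalty that keeps pixel values in a reasonable range.
    loss = -score + 1e-3 * image.pow(2).mean()
    loss.backward()
    optimizer.step()

print("final target response:", model(image).item())
```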

Brain Canada’s Future Leaders in Canadian Brain Research program aims to accelerate novel and transformative research that will change the understanding of nervous system function and dysfunction and their impact on health. It has been made possible by the Canada Brain Research Fund, an arrangement between the Government of Canada (through Health Canada), the Brain Canada Foundation and the Azrieli Foundation, with support from the Erika Legacy Foundation, the Arrell Family Foundation, the Segal Foundation and the Canadian Institutes of Health Research.

Professor receives patent to improve AI machine learning

Steven Xiaogang Wang, a professor in York University’s Department of Mathematics & Statistics in the Faculty of Science, and a member of the Laboratory of Mathematical Parallel Systems, has had a U.S. patent approved for an algorithm that will reduce the training time of artificial intelligence (AI) machine learning (ML).

The patent, titled “Parallel Residual Neural Network Architecture and System and Method for Training a Residual Neural Network,” was inspired by a 2018 paper titled “Decoupling the Layers in Residual Network.” Both were based on collaborations with Ricky Fok, a former postdoctoral fellow; Aijun An, a professor in the Department of Electrical Engineering & Computer Science; and Zana Rashidi, a former graduate research assistant who carried out some of the computing experiments.

Steven Wang

The now-patented algorithm, approved this year, was the result of six months of research at York. It was submitted to the United States Patent and Trademark Office in 2019. The algorithm’s framework is based on mathematical arguments that help significantly reduce the training time of machine learning as it absorbs, processes and analyzes new information. It does so by using a mathematical formula to allow residual networks – responsible for the training of AI – to compute in parallel with each other, thereby enabling faster simultaneous learning.
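
The article stays high level, but the rough intuition behind decoupling residual layers, simplified here from the general idea of the cited 2018 paper rather than the patented algorithm itself, is that a sequential residual stack y = x + f2(x + f1(x)) can be approximated to first order by x + f1(x) + f2(x), whose branches no longer depend on one another and can therefore be computed at the same time.

```python
# Toy sketch of the intuition behind decoupled, parallel residual
# computation; a simplified illustration, not the patented algorithm.
import torch
import torch.nn as nn

def block(dim):
    return nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

dim = 16
f1, f2 = block(dim), block(dim)
x = torch.randn(4, dim)

# Sequential residual stack: each block waits for the previous one.
y_seq = x + f1(x)
y_seq = y_seq + f2(y_seq)

# First-order decoupled form: both branches read the same input x,
# so f1(x) and f2(x) could run on separate devices simultaneously.
y_par = x + f1(x) + f2(x)

err = (y_seq - y_par).norm() / y_seq.norm()
print(f"relative difference between the two forms: {err:.3f}")
```

The printed difference is the approximation error the decoupled form trades for parallelism.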

Wang’s desire to accelerate machine learning’s abilities is driven, in part, by a specific area of AI applications. “I want to apply all the algorithms I develop to health care,” Wang says. “This is my dream and mission.”

Wang has especially focused on using AI to improve care for seniors and that work has previously earned him the Queen Elizabeth II Platinum Jubilee Award from the House of Commons for initiatives during COVID-19 to mitigate the spread of the virus in long-term care facilities.

Wang plans to use the patented algorithm in ongoing projects that aim to provide smart monitoring of biological signals for seniors. For example, it could be used in long-term care to continuously monitor electrocardiogram signals at night and detect when a heartbeat has stopped. To move toward that goal, Wang is also working on building an AI platform that will complement those ambitions, and expects it to be ready in several years.
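
As a purely hypothetical illustration of the kind of overnight check described above (toy logic, not Wang’s platform), the core of such monitoring reduces to flagging gaps between detected heartbeats that exceed a tolerance window.

```python
# Hypothetical sketch of an overnight heartbeat-gap check; toy logic,
# not Wang's platform. Beat timestamps would come from an upstream
# R-peak detector running on the ECG signal.
def check_heartbeat_gaps(beat_times_s, max_gap_s=5.0):
    """Given sorted beat timestamps in seconds, return the intervals
    where no beat was detected for longer than max_gap_s."""
    return [(prev, curr)
            for prev, curr in zip(beat_times_s, beat_times_s[1:])
            if curr - prev > max_gap_s]

# Example: a 12-second silence between beats raises an alarm.
beats = [0.0, 0.9, 1.8, 2.7, 14.7, 15.6]
print(check_heartbeat_gaps(beats))  # -> [(2.7, 14.7)]
```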

He is deeply invested in the social impact of AI as a member of the Centre for Artificial Intelligence & Society, a York organized research unit where researchers collectively advance the state of the art in the theory and practice of AI systems, governance and public policy. 

“I can use the machine learning to help the long-term care facilities improve the quality of care, but also help out with the struggles of the Canadian health-care system,” says Wang.