By William Crooks
Local Journalism Initiative
The Quebec government has unveiled new tools to guide colleges and universities in the responsible integration of artificial intelligence (AI), marking a significant step in the province’s effort to adapt higher education to rapid technological change. While the announcement underscores growing attention to AI’s role in teaching and learning, some experts remain sceptical about the technology’s trajectory and its long-term impact.
On Aug. 18, Minister of Higher Education Pascale Déry introduced two key documents: Déploiement et intégration de l’intelligence artificielle en enseignement supérieur – Cadre de référence and Intégration responsable de l’intelligence artificielle dans les établissements d’enseignement supérieur : repères et bonnes pratiques – Guide pratique 2025. Together, these publications lay out principles, governance models, and practical examples for post-secondary institutions to adopt AI ethically and effectively.
“AI is now part of the higher education landscape and we must adapt and harness its potential,” Déry stated in a release. “It will be essential to focus on developing students’ digital skills so they understand both the potential and the limitations of these tools.” Parliamentary assistant Mario Asselin echoed her optimism, calling the guidelines a sign of “mobilization around issues raised by artificial intelligence.”
The government also plans further measures this fall, including a repository of best practices, a toolkit of real-world AI applications, and a directory of AI-related training programs tailored to different levels of education.
Sherbrooke emphasizes ethical integration
The Université de Sherbrooke (UdeS), which formed an expert committee in 2023 to address AI’s impact on its programs, welcomed the provincial framework as complementary to its own efforts. In an official response, UdeS Vice-Rector for Studies and Student Life Isabelle Dionne stressed that AI should “support human pedagogical mediation” and that integration must aim for “pedagogical effectiveness, not economic efficiency.”
The institution’s guiding principles include promoting ethical use, reinforcing students’ technological resilience, and rethinking the instructor’s role from “master to guide.” UdeS also emphasized the need to develop high-level competencies—critical thinking, analysis, and creativity—while embedding ethical considerations of generative AI in curricula.
According to Dionne, UdeS sees an opportunity to “position itself as a leader in ethical and pedagogical integration of AI” and intends to collaborate with government bodies to shape future directions.
Expert voices caution on real capabilities
Despite the enthusiasm, some academics urge caution when considering what AI can realistically deliver. Stefan Bruda, a computer science professor at Bishop’s University, expressed doubts about both the hype surrounding AI and the necessity of formal guidelines for universities.
“I was kind of surprised that everybody talks about guidelines,” he said in an interview. “Personally, I don’t see much of a reason to have these guidelines. Every instructor in computer science knows what AI models can and cannot do and can make their own decisions.” While acknowledging that other disciplines may face different challenges, Bruda suggested that autonomy at the course level remains sufficient for his field.
Progress without breakthroughs
Reflecting on developments since 2023, Bruda described AI’s evolution as “a matter of quantity over quality.” While models have become more capable and user-friendly, their core design principles have not fundamentally changed.
“There haven’t been huge developments in terms of algorithms,” he explained. “But there have been tremendous advances in what AI can do, particularly in how models interact with people. Version 4 of ChatGPT, for example, was much better than version 3 in terms of sounding more human.”
However, persistent issues remain, notably “hallucinations”—incorrect or fabricated responses. “That hasn’t changed and I don’t think it’s going to get any better,” Bruda said, pointing to limitations in training data. “They have scraped the whole web. An AI model is only as good as its training data, and we’re reaching an apex. On top of that, the internet is now full of AI-generated content, which is a very bad thing for training models.”
Mixed performance across disciplines
Bruda noted that AI tools perform unevenly across academic tasks. “For first-year programming courses, ChatGPT can provide correct answers very easily,” he said. “It has gotten better and could competently go all the way to upper years in coding. But it continues to be very bad in applied math.”
This uneven performance complicates the use of AI tools in education and helps explain why some instructors exercise caution when incorporating them into coursework.
Future potential—and hard limits
Asked whether AI might transform the labour market, Bruda was careful not to overstate its power. “These models are going to be part of our professional life,” he acknowledged, adding that some low-level white-collar jobs could be replaced. “Receptionists or roles that require interaction with the public but not much creativity—those are very much in danger.”
Beyond that, he remains sceptical. “We’re reaching a hard limit in terms of training data, and if we get to that limit, we get to the limit of what these models can do,” he said. “I don’t think they’re going to get substantially better than they are today.”
Bruda also questioned the economic sustainability of large-scale AI models. “There is a huge amount of energy and hardware behind those models. They haven’t gotten beyond cute little toys right now,” he remarked. “I don’t think these companies have made a profit so far, and I’m not sure they could in the future.”
A turning point—or a plateau?
While Quebec pushes forward with governance frameworks and institutions like UdeS plan for ethical integration, Bruda offered a sobering takeaway: “The current state of AI might be shaky. It is possible that we are seeing the apex of what these models can do.”
For now, universities will continue to navigate the tension between optimism and realism—balancing AI’s promise as an educational tool with the practical and ethical questions it raises.