AI Ethics and Governance Lab


About the Lab

The AI Ethics and Governance Lab at the Centre for Artificial Intelligence Research (CAiRE), Hong Kong University of Science and Technology (HKUST) works at the cutting edge of science, technology, ethics, and public policy. Our mission is to advance the ethical research, development, and implementation of AI in a manner that respects the diverse range of human values across cultures. This includes encouraging responsible AI practices that are safe, transparent, and explainable, while also promoting the benefits of AI for society as a whole. We are dedicated to offering theoretical insights and practical guidance to decision-makers in both the private and public sectors. By engaging with a broad array of stakeholders, including industry leaders, policymakers, academics, and the general public, we produce knowledge and policy recommendations that are nuanced in their understanding of cultural, ethical, and technological complexities.


Our Research Methodology

Our Lab operates at the intersection of multiple fields, combining computer science, psychology, philosophy, public policy, and social sciences to create a multidimensional view of the opportunities and risks presented by AI. Our research methodology is comprehensive and multidisciplinary, blending empirical analysis, conceptual inquiry, and policy-oriented studies. Central to our approach is the recognition of the importance of a comparative East-West perspective in addressing the ethical and governance challenges of AI. We strive to break down barriers to create a vibrant space where different intellectual approaches can enrich one another.

 

Our Challenges

The rapid pace of AI development and deployment necessitates immediate attention to a range of ethical and governance challenges. The challenges our lab is currently focusing on include:

  1. How can we design AI systems that reduce rather than amplify societal biases and discrimination?
  2. What methods can we use to improve the safety, reliability, and usability of AI systems?
  3. How can AI tools be effectively and ethically used in society, including in critical systems as well as in creative arts?
  4. How can we use technology and policy to foster innovation for sustainability?
  5. How can the understanding of differing East-West intellectual traditions influence the design of globally applicable ethical and governance frameworks for AI?

AI Ethics and Governance Lab Event Calendar

 


Team members (in alphabetical order)

 

Co-Directors

Professor Janet Hui-Wen Hsiao

Janet H. Hsiao is a Professor at the Division of Social Science at HKUST. Her research interests include cognitive science, explainable AI, computational modeling, theory of mind, visual cognition, and psycholinguistics.

Personal Website

Associate Professor Masaru Yarime
Masaru Yarime is an Associate Professor at the Division of Public Policy and the Division of Environment and Sustainability at HKUST. His research interests focus on emerging technologies including artificial intelligence, the internet of things, blockchain, and smart cities, and their implications for public policy and governance.
 

Personal Website

Media Mentions


Core members

Dr. Hao Chen

Hao Chen is an Assistant Professor in the Department of Computer Science and Engineering and the Department of Chemical and Biological Engineering. He leads the Smart Lab, which focuses on developing trustworthy AI for healthcare. He has 100+ publications (Google Scholar citations 24K+, h-index 63) in MICCAI, IEEE-TMI, MIA, CVPR, AAAI, Nature Communications, Radiology, Lancet Digital Health, Nature Machine Intelligence, JAMA, etc. He also has extensive industrial research experience (e.g., at Siemens) and holds a dozen patents in AI and medical image analysis.

 

Dr. Linus Huang

Linus Huang is a Research Assistant Professor at the Division of Humanities. Trained as a philosopher of science, he focuses on algorithmic bias, explainable AI, value alignment, the ethics of emerging technology, and theoretical issues in computational cognitive neuroscience.

Personal Website

 

Chengzhong Liu
Chengzhong Liu is a final-year PhD student in the Department of Computer Science and Engineering, focusing on human-computer interaction (HCI). His research centers on the design and governance of generative AI applications, e.g., how to design AI applications with proper ethical considerations.
 
Dr. Yang Liu
Yang Liu is a Research Assistant Professor of Humanities and a Fellow of the Institute of Advanced Study at HKUST. He is also a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence and the Faculty of Philosophy at the University of Cambridge. His research interests include the philosophy of AI, logic, and the foundations of decision and probability theory.
 
Professor Kira Matus

Kira Matus is a founding member of the Lab, Professor in the Division of Public Policy and the Division of Environment and Sustainability, and an Associate Dean in the Academy of Interdisciplinary Studies at HKUST. She is a scholar of public policy, innovation, and regulation, with a particular interest in the use of policy to incentivize and regulate emerging technologies. She has a special interest in the roles of non-state/private governance institutions, such as certification systems, as well as the sustainability implications of new technologies.

Dr. Gleb Papyshev

Gleb Papyshev is a Research Assistant Professor at the Division of Social Science at HKUST. His research covers the areas of AI policy and regulation, AI ethics, and corporate governance mechanisms for emerging technologies.

Personal Website

 

Dr. Hailong Qin

Hailong Qin is a Post-Doctoral Fellow in Social Science. He holds a PhD in Computer Science from the Harbin Institute of Technology and has five years of experience as an algorithm engineer at Internet companies. His research interests include the history of Chinese artificial intelligence development, natural language processing, and social network analysis.

 


Affiliated members

Professor Kellee S. Tsai

Kellee S. Tsai is the founding Director of the Lab. She is currently the Dean of Social Science and Humanities at Northeastern University. Trained as a political scientist, her areas of expertise include comparative politics, political economy of China and India, and informal institutions.

Media Mentions

 


Resources


Publications

  • Hsiao, J. H., & Chan, A. B. (2023). Towards the next generation explainable AI that promotes AI-human mutual understanding. NeurIPS XAIA 2023. https://openreview.net/forum?id=d7FsEtYjvN
  • Veale, Michael, Kira Matus, and Robert Gorwa. 2023. Global Governance of Machine Learning Algorithms. Annual Review of Law and Social Science, 19. https://www.annualreviews.org/doi/abs/10.1146/annurev-lawsocsci-020223-040749
  • Liu, Yang et al. 2023. “The Meanings of AI: A Cross-Cultural Comparison.” In Cave, S., K. Dihal (ed.), Imagining AI: How the World Sees Intelligent Machines. Oxford University Press, pp. 16–39.
  • Matthew Stephenson, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan. 2023. “AI as a SusTech Solution: Enabling AI and Other 4IR Technologies to Drive Sustainable Development through Value Chains.” In Francesca Mazzi and Luciano Floridi, eds., The Ethics of Artificial Intelligence for the Sustainable Development Goals, Springer Nature, 183-201. https://link.springer.com/chapter/10.1007/978-3-031-21147-8_11
  • Aoki, Naomi, Melvin Tay, and Masaru Yarime. 2024. “Trustworthy Public-Sector AI: Research Progress and Future Agendas,” in Yannis Charalabidis, Rony Medaglia, and Colin van Noordt, eds., Research Handbook on Public Management and Artificial Intelligence, Edward Elgar, 260-273. https://www.e-elgar.com/shop/gbp/research-handbook-on-public-management-and-artificial-intelligence-9781802207330.html
  • Xie, Siqi, Ning Luo, and Masaru Yarime. 2023. “Data Governance for Smart Cities in China: The Case of Shenzhen,” Policy Design and Practice. DOI: 10.1080/25741292.2023.2297445. https://www.tandfonline.com/doi/full/10.1080/25741292.2023.2297445
  • Papyshev, Gleb, and Masaru Yarime. 2023. “The Challenges of Industry Self-Regulation of AI in Emerging Economies: Implications of the Case of Russia for Public Policy and Institutional Development,” in Mark Findlay, Ong Li Min, and Zhang Wenxi, eds., Elgar Companion to Regulating AI and Big Data in Emerging Economies, Edward Elgar, 81-98. https://www.e-elgar.com/shop/gbp/elgar-companion-to-regulating-ai-and-big-data-in-emergent-economies-9781785362392.html
  • Li, Zhizhao, Yuqing Guo, Masaru Yarime, and Xun Wu. 2023. “Policy Designs for Adaptive Governance of Disruptive Technologies: The Case of Facial Recognition Technology (FRT) in China,” Policy Design and Practice, 6 (1), 27-40. https://www.tandfonline.com/doi/full/10.1080/25741292.2022.2162248
  • Ó hÉigeartaigh, Seán, Jess Whittlestone, Yang Liu, Yi Zeng, and Zhe Liu. 2020. “Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance.” Philosophy and Technology 33: 571–593.
  • Papyshev, Gleb, and Masaru Yarime. 2022. “The Limitation of Ethics-based Approaches to Regulating Artificial Intelligence: Regulatory Gifting in the Context of Russia.” AI & SOCIETY. https://doi.org/10.1007/s00146-022-01611-y
  • Thu, Moe Kyaw, Shotaro Beppu, Masaru Yarime, and Sotaro Shibayama. 2022. "Role of Machine and Organizational Structure in Science." PLoS ONE, 17 (8), e0272280 (2022). https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0272280
  • Chan, Keith Jin Deng, Gleb Papyshev, and Masaru Yarime. 2022. "Balancing the Tradeoff between Regulation and Innovation for Artificial Intelligence: An Analysis of Top-down Command and Control and Bottom-up Self-Regulatory Approaches," SSRN, October 19. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4223016#
  • Provided input to The Presidio Recommendations on Responsible Generative AI. 2023. Based on Responsible AI Leadership: A Global Summit on Generative AI, World Economic Forum in collaboration with AI Commons, June. https://www3.weforum.org/docs/WEF_Presidio_Recommendations_on_Responsible_Generative_AI_2023.pdf
  • Matus, Kira and Veale, Michael. 2022. The use of certification to regulate the social impacts of machine learning: Lessons from sustainability certification. Regulation & Governance, 16:177-196. https://doi.org/10.1111/rego.12417
  • Sivarudran Pillai, V. and Matus, KJM. 2021. Towards a responsible integration of artificial intelligence technology in the construction sector. Science and Public Policy, 47 (5), 689-704. https://doi.org/10.1093/scipol/scaa073
  • Stephenson, Matthew, Iza Lejarraga, Kira Matus, Yacob Mulugetta, Masaru Yarime, and James Zhan. 2021. “SusTech: Enabling New Technologies to Drive Sustainable Development through Value Chains.” G20 Insights: TF4-Digital Transformation. https://www.t20italy.org/wp-content/uploads/2021/09/TF4-PB8_final.pdf
  • Lin, Y.-T., Hung, T.-W., & Huang, L. T.-L. (2020). Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias. Philosophy and Technology, 34(1).
  • Huang, L. T.-L., Chen, H.-Y., Lin, Y.-T., Huang, T.-R., & Hung, T.-W. (2022). Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.