Universal Guidelines for AI - Bios



Bios


Evelina Ayrapetyan, Communications Strategist

 

Evelina Ayrapetyan is a researcher, communications strategist, and advocate for human rights and democracy at the intersection of emerging technologies and repressive environments. She received her AI policy training at the Center for AI and Digital Policy, where she currently serves as a policy group member. Her work centers on safeguarding democracy abroad, especially in lesser-covered nations, and she is committed to fighting state-sponsored disinformation on the Internet as it pertains to the training of generative AI platforms. Evelina is currently a participant in the Los Angeles FBI Citizens’ Academy, where she collaborates with community leaders to discuss and tackle national security challenges in the Los Angeles area. She also volunteers with the National Institute of Standards and Technology Generative AI Risk Management Framework working group and is mentoring three technologists building safer AI products through All Tech is Human. Evelina has worked with a variety of philanthropic organizations, coaching teams, fostering communities, and playing a pivotal role in the creation and implementation of economic development programs for various stakeholders. Her perspective has been shaped by living and traveling in countries including Armenia, Spain, Nigeria, Cuba, and the UAE. She completed her MA in Public Policy at the Harris School of Public Policy at The University of Chicago in 2022 and currently resides in Los Angeles, California.


Cornelia Evers, European Institute at the London School of Economics

 

Cornelia Evers is affiliated with the European Institute at the London School of Economics. She specialises in AI governance through a multidisciplinary approach, combining public policy, politics, and Science and Technology Studies (STS). With a particular focus on ethical AI governance, she recently presented her thesis on the discourse on ethical AI between the European Commission and Big Tech companies and how the impending European AI Act repoliticises a widely captured debate. Cornelia previously worked on AI research focusing on 'ghost work' and a multi-stakeholder project on AI and the future of work at University College London and the British Academy. Currently, she is affiliated with the Office for Information and Communication Technologies at the UN, working on responsible AI practices. Enthusiastic about the five-year anniversary of the Universal Guidelines for AI by the Center for AI and Digital Policy, she looks forward to the conference celebrating it.


Niharika Gujela, Hertie School in Berlin, Germany

 

Niharika is currently pursuing a Master's in Public Policy at the Hertie School in Berlin, Germany. Her professional background includes five years of consulting experience on digital governance projects, spanning both the public sector and international development. She currently serves as a CAIDP Research Fellow and holds a bachelor's degree in Information Technology. Her research interests focus on the governance of artificial intelligence, digital public goods and infrastructure, and open government initiatives.


Merve Hickok, CAIDP President and Research Director

 

Merve Hickok is the President and Senior Research Director of the Center for AI and Digital Policy, and the Founder of AIethicist.org. Her work intersects AI ethics and AI policy and governance. She focuses on AI bias, social justice, DE&I, public benefit, and participatory development and governance, and on how they translate into policies and practices. Merve is a Data Ethics Lecturer at the University of Michigan School of Information; Member of the Advisory Board of Turkish Policy Quarterly Journal; Member of the Founding Editorial Board of the Springer Nature AI & Ethics journal; Advisor at The Civic Data Library of Context; and a member of IEEE working groups on AI standard setting and of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), alongside national institutions. She has been recognized by several organizations, most recently as one of the 100 Brilliant Women in AI Ethics™ 2021. Merve is also a member of the Board of Directors at the Northern Nevada International Center. NNIC serves as a refugee resettlement agency helping displaced persons and victims of human trafficking, and organizes programs for international delegations through the U.S. Department of State and other federal agencies. NNIC is among the top organizations in the United States, hosting more than a dozen large exchange programs for the U.S. Department of State Bureau of Educational and Cultural Affairs.


Abhishek Gupta, Director for Responsible AI with the Boston Consulting Group

 

Abhishek Gupta is the Director for Responsible AI with the Boston Consulting Group (BCG) where he works with BCG's Chief AI Ethics Officer to advise clients and build end-to-end Responsible AI programs. He is also the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. Through his work as the Chair of the Standards Working Group at the Green Software Foundation, he is leading the development of a Software Carbon Intensity standard towards the comparable and interoperable measurement of the environmental impacts of AI systems.

 

His work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations, specializing in operationalizing Responsible AI within organizations and in assessing and mitigating the environmental impact of these systems. He has advised national governments, multilateral organizations, academic institutions, and corporations across the globe. His work on community building has been recognized by governments across North America, Europe, Asia, and Oceania. He is a highly sought-after speaker, with talks at the United Nations, European Parliament, G7 AI Summit, TEDx, Harvard Business School, and Kellogg School of Management, among others. His writing on Responsible AI has been featured by The Wall Street Journal, Forbes, MIT Technology Review, Protocol, Fortune, and VentureBeat, among others.


Arik Karim, Student Political Research Initiative for New Governance (SPRING) Institute

 

Arik Karim is a debater, student journalist, researcher, and founder of the Student Political Research Initiative for New Governance (SPRING) Institute, a group tackling important topics in policy through the perspective of often underrepresented youth stakeholders. He is interested in exploring the convergence of international relations, philosophy, and public policy with a special interest in artificial intelligence and existential risk.


Lorraine Kisselburgh, CAIDP Chair

 

Lorraine Kisselburgh (Ph.D., Purdue University) is the inaugural Chair of ACM’s global Technology Policy Council, where she oversees global technology policy engagement. Drawing on the expertise of 100,000 members, ACM’s policy groups provide nonpartisan technical expertise to policy leaders, stakeholders, and the public. In 2018 she was a member of ACM’s Code of Ethics Task Force and a Scholar in Residence at the Electronic Privacy Information Center in Washington, DC, where she helped develop the Universal Guidelines for Artificial Intelligence.

 

At Purdue University, Dr. Kisselburgh is a lecturer, a fellow in the Center for Research in Information Assurance and Security, and a former professor of media, technology, and society. Her research on the social and cultural implications of technologies, including privacy, ethics, gender equity, and collaboration, has been conducted in China, India, Europe, and the Middle East. She has published more than 50 articles with 11 top paper awards, and has been awarded more than $2 million in funding to support her research. Funded by the National Science Foundation, she developed a framework to enhance the ethical reasoning skills of STEM researchers, and also studied collaborative systems for creative designers. Purdue recognized her as the inaugural Faculty Scholar in the Center for Leadership, a Diversity Faculty Fellow, and a Violet Haas Award recipient for her efforts on behalf of women.


Caroline Friedman Levy, Edna Bennett Pierce Prevention Research Center

 

Caroline Friedman Levy is a Research-to-Policy Collaboration Scholar at the Edna Bennett Pierce Prevention Research Center at Pennsylvania State University, focused on applying behavioral science research to the implementation of evidence-based policies. She is a clinical psychologist, served as a Policy Fellow at the Department for Education in the UK, and is currently a member of the CAIDP Policy Group.


Jens Meijen, University of Leuven

 

Jens Meijen works as lead consultant at Ulysses AI and as a doctoral researcher at the University of Leuven. He is also an Atlantic Dialogues Emerging Leader at the Policy Center for the New South and an IDEA Practitioner at NASA. He was previously a researcher and team leader at the Center for AI and Digital Policy, a Global Policy Fellow at the Institute for Technology & Society in Rio de Janeiro, and a Europaeum Scholar in the Oxford University-hosted Europaeum network. He has worked as a journalist and author for almost ten years and has published several books.


Wonki Min, Honorary President of SUNY Korea

 

Wonki Min is Honorary President of SUNY (State University of New York) Korea and former Ambassador for Science Technology & Innovation, Republic of Korea. He served as Vice Minister at the Korean Ministry of Science and ICT. At the OECD, Mr. Min chaired the AI Expert Group and the Committee on Digital Economy Policy (CDEP). He was the Chairman of the 2015 ITU Council and the 2014 ITU Plenipotentiary Conference.


Claudio Mutua, Sian Mutua Advocates

 

"Claudio Mutua is a Kenyan advocate of the law  with 5 years post-admission experience at Sian Mutua Advocates, Nairobi. His interests lie in the convergence between human rights and emerging areas such as privacy and AI. He is also currently an LLM student at the University of Nairobi working on a thesis on the right to genetic privacy in the era of the intersection between direct-to-consumer genetic testing and digitization of genetic records." 


Heramb Podar, CoVisualize

 

Heramb is an effective altruist who is committed to maximizing social impact and combating existential risks to humanity. He co-founded PhysicsGenie (https://physicsgenie.org/), an EdTech platform that gamifies STEM education. In the past, he has worked at Reach4Help to map critical aid resources, and he also serves as the Co-Executive Director of Policy for the People. Feel free to reach out to him at podar_hd@cy.iitr.ac.in if you are working on any project to help humanity!


Marc Rotenberg, CAIDP Executive Director and Founder

 

Marc Rotenberg is Executive Director and Founder of the Center for AI and Digital Policy. He is a leading expert in data protection, open government, and AI policy. He has served on many international advisory panels, including the OECD AI Group of Experts. Marc helped draft the Universal Guidelines for AI, a widely endorsed human rights framework for the regulation of Artificial Intelligence. Marc is the author of several textbooks, including the 2020 AI Policy Sourcebook and Privacy and Society (West Academic 2016). He teaches privacy law and the GDPR at Georgetown Law. Marc has spoken frequently before the US Congress, the European Parliament, the OECD, UNESCO, judicial conferences, and international organizations. Marc has directed international comparative law studies on Privacy and Human Rights, Cryptography and Liberty, and Artificial Intelligence and Democratic Values. Marc is a graduate of Harvard College, Stanford Law School, and Georgetown Law.


Dr. Grace Thomson, CAIDP Teaching Fellow

 

Dr. Thomson is the Academic Director at CERT, the corporate arm of the Higher Colleges of Technology (HCT). She leads expert teams in the conceptualization and operation of two academies: the AI Academy and the VAT Academy. The AI Academy, created in 2019, is an initiative to empower UAE talent and bridge the gap in AI talent and AI knowledge through national programs in cognitive technology skills for ethical AI adoption.

CERT is a member of the UAE AI Network, an initiative of the Ministry of Artificial Intelligence to increase AI adoption with a human-centered approach. Dr. Thomson’s ability to engage internal and external stakeholders in strategic triple-helix roadmaps drives CERT’s plans to develop occupational standards in Data Science, Data Analytics, Software Engineering, AI, Cybersecurity, and Blockchain for the workforce of the future. She leads strategies for the creation of Talent Acceleration models that draw on principles of human development and technology development to use ML models and analytics ethically in career pathway formulation and employability. In cooperation with CERT’s partners, she leads the creation of research ecosystem platforms that connect faculty-student researchers to applied projects with industry.


Brian Zhou, United States Naval Research Laboratory

 

Brian Zhou is a student researcher at the United States Naval Research Laboratory and the founder of the Student Political Research Initiative for New Governance (SPRING) Institute, a group tackling important topics in policy through the perspective of often underrepresented youth stakeholders. Brian serves on Encode Justice's AI Advisory Board and Fidutam's Emerging Technologies Youth Board, and is interested in the intersections of AI and the human domain. His research has been published at the 37th AAAI and 40th ICML conferences.