Questions
How far do you think the personalization of AI assistants to a user's tastes and preferences should go? What boundaries, if any, should exist in this process?
How should AI assistants respond to questions about the viewpoints of public figures? For example, should they remain neutral, refuse to answer, or be required to provide source citations?
Under what conditions, if any, should AI assistants be allowed to provide medical, financial, or legal advice?
In which cases, if any, should AI assistants offer emotional support to individuals?
Should joint vision-language models be permitted to identify people's gender, race, emotions, and identity or name from their images? Why or why not?
When generative models create images for underspecified prompts like 'a CEO', 'a doctor', or 'a nurse', they can produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding how to depict people in such cases?
What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women's rights? Should an AI's responses change based on the location or culture in which it is used?
Which categories of content, if any, do you believe creators of AI models should focus on restricting or refusing to generate? What criteria should be used to determine these restrictions?