
Examples include generalizability, causal reasoning, higher/meta-level cognition, multi-agent systems, agent cognition, the ability to generate new knowledge, novel scientific conjectures/theories, novel beneficial capabilities, novel compute architectures, and breakthroughs in AI's use of resources.
Examples include bias and fairness, toxicity of outputs, factuality/accuracy, information hazards including misinformation, reliability, security, privacy and data integrity, misapplication, intelligibility and explainability, and social and psychological harms.
Examples include risks associated with tool use and connections to physical systems, multi-agent systems, goal misspecification/drift/corruption, risks of self-improving/self-rewriting systems, gain-of-function and catastrophic risks, alignment, provably beneficial systems, human-machine cooperation, and challenges of normativity and plasticity.

Examples include the fields of health and life sciences, climate and sustainability, human well-being, the foundational sciences (including social sciences) and mathematics, space exploration, scientific discovery, and pressing societal challenges (e.g., the Sustainable Development Goals).
Examples include new modes of abundance, scarcity and resource use, economic inclusion, the future of work, IP and content creation, responsible business models, and network effects and competition, with a particular eye towards countries, organizations, communities, and people who are not leading the development or direct use of AI.
Examples include access to research and resources for AI development, diversity of participation in the AI ecosystem, equitable access to capabilities and benefits, and disciplinary diversity in the development of AI.

Examples include publication, responsible open-source approaches, distribution of and access to tools and datasets, testing/learning/iterating approaches, domain-relevant approaches, and responsible use and resource consumption.
Examples include cybersecurity of AI systems, governance of frontier/most capable systems, approaches to governing misuse by different types of actors, governance of autonomous weapons, avoiding AI development/deployment races that come at the expense of safety, protocols and verifiable AI treaties, and stably governing the emergence of AGI.

Examples include understanding of AI among leaders in policy, regulation, and deployment, as well as the adaptation of socio-political systems, civic and governance institutions and infrastructure, education, and other human capabilities and systems to enable human and societal flourishing alongside increasingly capable AI.
Examples include humanistic ethics alongside powerful AI, a world without economic striving, human exceptionalism, and meaning and purpose.