We're building a community to explore this question together. Currently in formation.

Join Our Founding Community

We enable humans to build on the successes of those who came before them, and to find solutions that ensure our descendants inherit a world with every bit of the potential we ourselves have had.
While we recognise the vast potential AI brings to the human and non-human world, we also understand that 4.6 billion years of evolution have brought us to our current place. Life is infinitely precious. The speed of current AI development calls our shared human future into question. We advocate deep thinking about both the direction and the pace of that development.
AI does not need to go rogue or suddenly become all-powerful to pose an existential threat. The more likely danger is quieter, and harder to resist.
As machines become more competitive across economic life, governance, and culture, human participation in those systems becomes less necessary. And it is precisely that participation -- our indispensability -- that has historically kept institutions oriented toward human flourishing.
Remove that participation, and the feedback loops that made civilisation work for people begin to break down. Economic power, state power, and cultural influence can each reinforce this drift in the others, accelerating a process that no single actor chose and that no single actor can easily stop.
No one yet has a concrete, plausible plan for stopping gradual human disempowerment. We believe building one is among the most important tasks of our time.
"The alignment of societal systems with human interests has been stable only because of the necessity of human participation. Once this participation gets displaced by more competitive machine alternatives, our institutions' incentives will be untethered from a need to ensure human flourishing."
From Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development, Kulveit et al. (2025)
How do we preserve meaningful human participation in economies, governance, and culture as AI becomes more capable?
What governance structures can protect future generations from a slow erosion of human agency?
How do we build institutions that remain oriented toward human flourishing even as human labour and cognition become less economically necessary?
What does it mean to steward this transition as a good ancestor would?
How do we ensure AI development honours billions of years of evolution and the full depth of what it means to be human?
How do we build technology that enhances rather than replaces human agency -- and how do we know the difference?
This is the formation stage. We're testing whether there is appetite for a community that thinks deeply about AI's implications for human agency and about our responsibilities to future generations.
If these questions resonate with you, we would love to have you help shape what we become.
We're looking for founding members, advisors, and collaborators who think seriously about human agency, long-term civilisational risk, and what it means to be responsible stewards of this moment.
Whether you work in AI safety, policy, ethics, or simply care deeply about our collective future -- we want to hear from you.