A Personal AI Manifesto

Most of what is said about AI is false.

Two stories dominate. In one, AI is a curse: a machine that will hollow out judgment, destroy meaningful work, and leave human beings diminished. In the other, AI is a perfect servant: a system so useful that dependence begins to look like progress.

I reject both.

The first mistakes fear for wisdom. The second mistakes convenience for freedom. One treats intelligence as contamination. The other treats human strength as obsolete.

My view is simpler.

AI is a powerful tool. It can widen human capability, lower the cost of expertise, accelerate discovery, clarify complexity, and give ordinary people forms of leverage that once belonged mostly to institutions. It can also centralize power, cheapen judgment, reward passivity, and make dependency feel natural.

That is why my deepest fear is not intelligence itself.

My deepest fear is centralization.

By centralization, I do not mean cooperation, scale, or serious safety work. I mean a condition in which a few governments, firms, or aligned institutions control the terms of access, the limits of use, and the acceptable direction of development for everyone else.

I do not want a future in which intelligence becomes a permissioned utility controlled from above. I do not want citizens treated as risks to be managed rather than adults to be empowered.

Laws have a place. Fraud, coercion, impersonation, exploitation, and other concrete harms should be addressed. But there is a crucial difference between rules that punish abuse and rules that require prior institutional permission to think, build, or experiment. I support safeguards that make systems more answerable to users. I oppose regimes that turn intelligence itself into a licensed utility.

That is why I support open models, open tools, open research, and open-source alternatives now more than ever. By open, I do not mean reckless or consequence-free. I mean systems that can be inspected, studied, modified, and used without permanent dependence on a single company's or regulator's permission. An API can be useful, but it is not the same thing as genuine openness when access can be narrowed, filtered, priced, or revoked from above.

Open systems are not automatically virtuous. They can be abused, like every powerful tool. But they remain the best counterweight to monopoly, opacity, and paternalistic control. They preserve room for dissent, experimentation, local judgment, and personal agency. They also allow public criticism, collective debugging, and adaptation rather than passive dependence.

That is the side I am on.

I remain optimistic because the benefits are already visible. A single person can now do research that once required a staff. A patient can understand a diagnosis that would otherwise remain sealed behind jargon. A small business owner can analyze contracts, compare options, and think more strategically without hiring a miniature bureaucracy. A curious amateur can enter technical domains that were once locked behind cost, gatekeeping, and institutional scarcity.

That matters.

Intelligence has never been distributed only by talent. It has also been distributed by access: access to tutors, editors, analysts, researchers, lawyers, engineers, and time. AI does not erase inequality. Open tools still require hardware, time, literacy, and the confidence to use them well. The already advantaged will often benefit first. That is precisely why accessibility matters: affordable compute, broad education, usable local tools, and systems designed to teach rather than merely impress. Decentralization is not a complete answer. But centralization makes the problem worse by turning intelligence into a gated resource.

That is why I do not regard greater abundance of intelligence as a curse.

I regard it as a civilizational opportunity.

But only if it remains genuinely accessible.

I want AI that makes people more capable, not more compliant. I want AI that assists judgment, not AI that trains people to stop using it. I want AI that helps individuals and small groups do serious things without begging permission from centralized systems. I want AI that makes the world more legible to the people who actually live in it.

That last point matters.

Modern life is full of systems too large and technical for ordinary people to see clearly: medicine, law, finance, bureaucracy, software, government. One of AI’s highest uses is not replacing human beings inside those systems, but helping outsiders understand them. A good tool can explain a bill, a diagnosis, a policy, a contract, a procedure, a chain of causes, a set of options. It can turn obscurity into orientation.

Legibility is a form of power.

Tools do not merely serve intentions; they train habits. I favor systems that help users verify, compare, revise, and understand, not systems that reward passive acceptance of fluent output.

So my optimism is not naive. It is disciplined.

I recognize the dangers: surveillance, manipulation, deskilling, propaganda, dependency, synthetic intimacy, and the quiet transfer of authority from persons to platforms. I also recognize a second danger: that fear of these harms will be used to justify a world in which only approved institutions are allowed to wield advanced intelligence.

I also oppose the habit of importing science-fiction apocalypse into ordinary political judgment. Real dangers should be named as real dangers. But theatrical visions of machine doom are too often used to terrify ordinary people into accepting systems of control they would otherwise reject. Fear can be sincere. It can also be cultivated. In either case, panic is a poor foundation for deciding who gets to think, build, and act.

I reject that future too.

The goal is not a world where human beings become unnecessary. The goal is a world where more human beings become capable.

I want progress without delusion. I want ambition without idolatry. I want safeguards against harm without surrendering agency. I want intelligence that is open, distributed, and answerable to human beings rather than concentrated behind institutions that do not trust them.

I do not fear a future with more intelligence in it.

I fear a future in which it is monopolized and placed out of reach of ordinary people.