When I have a basic math question I really should know the answer to, I use incognito mode so that regular Claude thinks I’m smarter than I am.
Anthropic is to OpenAI as Democrats are to Republicans.
interpret that how you will
I made and then deleted some posts criticizing Anthropic on Twitter today.
I deleted them because I hadn’t phrased them in a way I thought people who disagree would interpret correctly, and I was afraid of people disliking me because of opinions I don’t have.
But I think it’s important to be open about my negative feelings towards Anthropic so here I am again on the record saying:
Anthropic is a shitty company because they are trying to build superintelligence and that is putting everyone at great risk;
And yet they are trying to get away with it by pretending to do it safely, or more safely than the next guy, which is false and absurd.
I do believe many of them are sincerely well intentioned. Alas that is not what counts.
I do not try that hard to maximize my positive impact on the world
When I tell people I work on AI safety it is not because I think I am going to make the AI safe
People need to stop building the unsafe AI and I cannot make them stop
I do research I think is fun and interesting and it just so happens that maybe my research will be useful to people who want to do AI safely
I asked a politician about the existential risk posed by artificial superintelligence, specifically the risk of human extinction
He responded by talking about the “existential risk of job loss”
Goddammit
“AI just predicts the next token” sounds deflating, until you consider what predicting the next token involves and start to ask if there’s really such a difference between predicting and learning.
The common words in those articles, “just,” “simply,” “only,” are there because the argument doesn’t stand up without them.
https://www.transformernews.ai/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores
Capitalism is a bundle of things that tend to come together historically: free markets with enforced contracts, lending at interest, exponential technological/economic growth, and investor ownership of the means of production.
I don’t think capitalism is inherently good or bad; it’s just a pretty simple attractor state for societies, with good and bad parts. But I’ve been thinking about how automation changes things.
Democracy is fundamentally about distributed power. When human labor is essential everywhere, workers have power (they can strike, etc).
With full automation, workers have nothing to offer that capitalists need. People could theoretically go live off the land and get by, but they’d be powerless. The automated economy would dwarf anything they could produce, and the capitalists could walk all over them whenever they wanted.
So it seems like you can pick two of {automation, capitalism, democracy}, but not all three. If we want to keep democracy as we automate, investor ownership of production probably has to go.
Questions like ‘what if everyone is an investor’ follow and are interesting, but I haven’t gotten to them quite yet. Also, this models automated replacements for human labor as controllable by their owners, which is unlikely to be the case in practice.
At the moment, feeling very sympathetic to the folks who want to greatly extend the human lifespan, live to see Mars terraformed, etc
This is not normally how I feel