LLMs are not neutral tools; using them harms people today
I realize by saying this publicly I might be putting my own career and future opportunities at risk. I am saying it anyway, because it is important to say.
I am a vim user. Not even NeoVim, just vim. So I like to think I have a good sense of when I am being cranky or a curmudgeon about tools. And my choice of editor doesn't impact other people, as long as I'm not so hampered by the choice that I'm slowing down the team.
The choice to use Large Language Models (LLMs) and other big "generative AI" models does impact other people. It impacts other people a lot. And that's without even getting into what people use Grok to do.
Building, training, and running these tools takes enormous amounts of energy, which is being provided by non-renewable, carbon-producing power generation. This, at a time when we are continually learning that the climate situation is worse than we thought. Close to the source, the harms of that pollution are mostly visited on Black and brown folks.
The people labeling training data, mostly non-white folks in the global South, are paid pennies as gig workers, often at sub-minimum wages.
These aren't theoretical harms, or big-picture "consolidating power in the hands of a few" harms. These are individual, nameable people actively being hurt, today, by these tools existing and being used.
So, no, I don't use Claude or Cursor or GPT or Gemini or Copilot. Absolutely not Grok. It's not because I'm being stubborn about adopting new tools—and I know what that feels like, because sometimes I am. It's because I don't think a marginal—at best—productivity gain for me is worth the harms it perpetrates on others.
(And using these tools often harms the users, too! And they're an economic boondoggle we will all suffer for. That's all in addition to the above.)