21 Comments
Nicholas Bronson:

Ever see one of those 80s business movies from the "greed is good" era? The trope of the "busy businessman" who gets his secretary to buy anniversary gifts for his wife and neglects his kid exists for a reason - we all instinctively know what a dick he is, and what an awful way that is to live your life. He isn't a hero for it, and even then we didn't admire him for it.

AI is going to provide new ways to disengage and automate parts of your life. It could take over talking to friends and engaging with your kids, but it won't, because we won't want to use it that way, any more than we wanted Clippy's help in the 90s. Just because Big Tech hasn't worked out how to make it appealing yet, and is rolling out ridiculous claims showing how little it understands people, doesn't mean it's time to panic.

Constant hand-wringing about what will happen to us, as if we possess no agency to choose, is no different from the ridiculous claims Big Tech is making about AI. The reality will be, as always, somewhere in the middle.

Stephen Moore:

I’m not saying it will happen - I’m saying it’s what Big Tech wants, and it will be bad for society if we allow or accept it. I agree with you that there will always be resistance (like the reaction to AI hardware that wants to be your “friend” or record your daily life), and that gives hope. But there is definitely a part of society that does want to integrate it into their lives, because they believe it’s going to improve any aspect of life it touches.

Nicholas Bronson:

Big Tech wants a lot of things; so far it's had limited success, because it hasn't found that many compelling use cases. In those areas where it has, there's been significant uptake - A.I. as a coding assistant is a game changer, particularly for a senior developer. It can increase productivity by orders of magnitude.

The fact that A.I. can help me throw together and refactor code quicker than I ever could before doesn't mean I've forgotten how to do it manually, any more than learning to use a fancy full-featured IDE like Visual Studio caused me to forget how to code with Notepad and a command-line compiler. I can still do it and always have been able to; I just don't see any great moral value in making something harder for myself just because I can.

I'm no A.I. utopian - there are plenty of massive problems coming down the line for us if we're not careful - and I'm definitely no apologist for Big Tech. But there have been quite a few articles like these recently whose key premise seems to be that, because of A.I., we'll all become less human, less capable, and more dependent.

It's a view of people as helpless children and I think it does them a disservice.

Stephen Moore:

I always think it’s generational. I’m from a generation that grew up without phones (got them in my early teens), and my nieces have all grown up with iPads in their hands from day one. It’s made for interesting differences, maybe good and bad, and it would be the same with AI automation - you know how to code, but if you grow up coding with agents, you’ll have less idea of how to do it yourself (say, if the whole system went down and everything went back to manual - god help kids these days if technology went down).

Nicholas Bronson:

That's possible... Like you, I'm from a generation before mobile phones (hence the fact I can code with Notepad, I guess). When I was managing developers, I was constantly frustrated by how many of them couldn't do things I took for granted, because they'd never had to.

However, that's part of the technology cycle. In software development it's happened a dozen times, as we moved from coding via soldering wires, to paper tape, to direct machine code, to more abstract assembly code, to higher-level code, to interpreted code. At each level the skills change, and the newer programmers can't do all of the things the previous ones could. You'd be hard pressed to find an average commercial developer who knows anything about memory optimisation these days, for instance - it's not necessary in 99% of coding jobs. Only those who end up working on embedded systems have to relearn such things.

As annoying as that can be for someone who does have these skills, and as easy as it is for us to get caught up in "back in my day"-isms, the bottom line is that each of these steps has brought more benefits than deficits. Sure, the average programmer can't optimise assembly-code drivers anymore, but the code I threw together in an afternoon to test a few benchmarking ideas for LLMs would likely have taken somewhere between a week and a month in assembly. I can still remember sitting up all night combing through the hundreds of lines of assembly needed for even simple calculations.
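The commenter doesn't show his benchmarking code, so as a purely hypothetical sketch of the kind of afternoon harness a high-level language makes trivial (every name and workload here is invented, not from the thread):

```python
import statistics
import time

def median_ms(fn, *args, repeats=5):
    """Run fn(*args) several times and return the median wall-clock time in ms."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Hypothetical workload standing in for "a few benchmarking ideas":
# split a chunk of text on whitespace and count the unique tokens.
text = "the quick brown fox jumps over the lazy dog " * 10_000

def count_unique_tokens(s):
    return len(set(s.split()))

print(f"count_unique_tokens: {median_ms(count_unique_tokens, text):.2f} ms")
```

A dozen lines like these replace what would be pages of hand-written assembly for the same timing-and-counting logic, which is the trade-off the comment describes.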

A.I., because it feels more human than most technology, has excited a sort of moral panic. But it's not AGI, it's not sentient, it's just another technology, with uses and potential drawbacks. It's not magic and, purely on its own terms, isn't any more dangerous than any other technology in the hands of someone who understands it.

If we ever do reach AGI, everything changes and in radical ways we can't even imagine yet - but Kurzweil has been calling that out for decades now. The rapture of the nerds, as they say, has so far been as absent as the religious one.

AlejMC:

Seconding the AI assistant part. I've been on C# from the very beginning, self-taught; it put me in the job domain I've been in for the last 20 years.

Thanks to GitHub Copilot (which apparently isn’t even one of the best?) I’ve been able to dabble in Python and other languages effortlessly, while at the same time learning new C# patterns the autocompletion sometimes spits out.

When easing into a new language, I ultimately don’t care what the random rules are: whether to use semicolons at line endings, whether spaces or tabs, or what the correct syntax is for including other library files.

“Just put whatever the language designers wanted a for-loop to be like, I don’t care about that, I care about this list I’m filling up and want to iterate on it”… is the way I see it.
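That attitude can be sketched in a few lines of Python (a hypothetical example with made-up values; the thread itself contains no code) - fill a list up, then iterate on it, whatever surface syntax the language designers chose for the loop:

```python
# Fill the list up...
scores = []
for n in range(1, 6):
    scores.append(n * n)

# ...then iterate on it. The intent is identical in C#, Python, or
# anything else; only the for-loop's syntax differs.
total = 0
for s in scores:
    total += s

print(scores, total)  # → [1, 4, 9, 16, 25] 55
```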

AlejMC:

On the choosing part, I would definitely let the AI agents do the dishes, the laundry, even go to the grocery and back on difficult days…

Why? Because I would like free time to, let’s say, learn how to draw - something AI also excels at, but which I won’t be choosing it to do… or maybe, at most, using it as a convenient teacher to aid the learning (“show me how this pose could be drawn step by step”).

Like you say, I doubt people will use it to go be an ass to their friends for saying one thing.

Trevor:

Not sure that AI wants our agency any more than newspapers, TV, religions, banks, political parties, corporate pharma and food companies, etc., already have. It’s always been up to us to make clear decisions for ourselves. We either lie down and let them use us, or harness our agency to make informed choices. (I’m not a leftie or a conspiracy theorist - in fact quite the opposite - I just think we all have a brain and should use it.)

Stephen Moore:

It’s not AI itself that wants our agency - it’s the ones creating it that do. And yes, I agree we should use our damn brains as much as possible! But many seem convinced that AI is a net benefit and that attaching it to as much of our lives as possible is a good thing. I disagree with that.

Jan Andrew Bloxham:

We are slipping from denial directly into apathy. Stopping now would require an awakening, a paradigm shift of consciousness, an almost religious principle of “all technology is bad” - which it, of course, isn’t.

The only way for politicians and leaders to lead properly is value-driven. It’s a question of character - mostly intellectual honesty and empathy, which in turn lead to all the good stuff: integrity, honesty, wisdom, compassion, courage, pragmatism, realism, rationality, intelligence, fairness, sustainability, etc.

Problem is, some of that empathy keeps the good people from ruthlessly exterminating bad people. Not necessarily with violence but just by dismissing them without a fair trial. Too much empathy clouds judgement and causes self-defeating irrational behaviour.

“The paradox of tolerance is that one must be intolerant of the intolerant” - Popper

It goes without saying that truly good people constantly introspect to keep their own malfunctions and biases in check.

Ked:
Feb 9 (edited)

Luddite is desperate😂

Elena P:

If algorithms only show us what we already love, how do we break free?

With AI agents tailoring our feeds we risk reinforcing confirmation bias and getting trapped in echo chambers. The real challenge is asking: are we ready to step out of our comfort zones, seek differing views, and see both sides of every story?

James daSilva:

I'm undecided on the impact of AI agents, especially outside back-office business functions where they might succeed without fanfare, a la machine learning use cases.

But I wonder whether this ship sailed in the late 2000s/early 2010s with smartphones. This passage describes the iPhone and apps, IMO, more than AI agents:

"We'll become dependent on them to live and operate, and we'll soon enough be fully wired into systems that are controlled and owned by monopolies that don't have our best interests at heart."

Curiosity Sparks Learning:

Stephen, have you read the recent book Superagency by Reid Hoffman and Greg Beato? I'd like to hear your reflections on it.

Aaron KJ Marcus:

I hope it all goes the way of the Internet of Things.

Frederick Woodruff:

I blame the invention of electric knives.

The way this shit captivates people’s imagination is testament to that old saying: “Advertising is selling stuff to people that they don’t need or want.” But then as you point out, the algorithm is the latest iteration of that con.

Everyone I know says the same thing: I don’t want AI. I don’t need AI. I’m not interested in AI. Why is this being crammed down my throat?

Or maybe that’s a generational thing. Not sure.

I sense a reckoning coming; the first intimation of it is the insanity of Elmo Musk buzz-sawing through the Treasury dept. I know that doesn’t represent AI exactly, but people have conflated him with all of this bullshit. May they both go down together.

Stephen Moore:

Yeah. I wonder if AI agents are a bust… that might put a pause on the whole thing. Maybe it goes a little underground again while the real builders figure out something more useful and applicable to society at large

CansaFis Foote:

…given there is no walking away from what THEY intend to do…and mass adoption of resistance is fairytale at best…what do you imagine is a way to build an alternative future realistically?…

Stephen Moore:

No idea. It’ll just be resistance, and having to accept you’ll probably be seen as a bit of an outsider 😂

Expand full comment
User:
Comment removed (Feb 7)
Stephen Moore:

My concern is that lots of life is repetitive - chores, work, cooking, going to the gym, etc. - so where does the line get drawn? It’ll start with mundane stuff like job applications, but after they’ve done that and still need to expand to make more $$$, what then?
