We’re being asked to use AI tools more at work. I just spent 30 minutes prompting & subsequently being gaslit by Copilot (our approved LLM) for a task that ultimately took me 10 minutes to do with actual intelligence. Is this the productivity they’ve promised us?
Sir Tim Berners-Lee writing in The Guardian about why he gave the World Wide Web away for free, how we might instill that ethos back into broader digital culture, and the dangers of an unregulated & unchecked AI industry:
Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.
AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can’t do your job, and when the bubble bursts, the money-hemorrhaging “foundation models” will be shut off and we’ll lose the AI that can’t do your job, and you will be long gone, retrained or retired or “discouraged” and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations.
Peter Thiel says regulating AI will hasten the Antichrist, and that the devil promises peace and security through regulation of tech. How did we get here? (Rhetorical) How do we get out of here? (Not rhetorical)
Harvard Business Review says AI-generated “Workslop” is destroying productivity:
The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.
I think it’s long past time I start discussing “artificial intelligence” (“AI”) as a failed technology. Specifically, that large language models (LLMs) have repeatedly and consistently failed to demonstrate value to anyone other than their investors and shareholders. The technology is a failure, and I’d like to invite you to join me in treating it as such.
A provocative & thoughtful take. And one worth considering as we are now years into the AI hype with little to show for it. An honest question: What value has generative AI provided to you?
Thursday, September 18, 2025
Brian Phillips writes a love letter to the em dash & laments the AI-shaming that’s grown online related to its use. I, too, love the em dash – and have used it in my writing for years – but I consciously started editing them out for fear of perceived AI use. No more! Em dash loud & proud!
On this Labor Day, let’s remember that most generative AI is built upon the exploitation of uncompensated work, and it will continue to be until there is regulation prohibiting it.
As a member of Gen X, I sometimes find myself getting nostalgic for my youth. When this happens I put on a Fugazi record or dive into an At the Drive-In live performance wormhole. That typically satisfies my urge. If you ever find me doomscrolling nostalgia-based AI slop, please just end me.
I’ve admired the work of Aaron Cope for a very long time – since he was at the Cooper-Hewitt and I was at the Carnegie Museum of Art – which I now realize measures my admiration in decades, not years. Time flies.
Anyhoo, Aaron’s now at the SFO Museum and he recently prompted several LLMs to tell him about his place of employment. He tested both open source and proprietary models, and found great disparity between them, which highlights some big questions around AI and socio-economic equity.
No model performed well and some flat-out lied. The entire recap is a must-read, but this passage gets a chef’s kiss from me:
Which begs the question: Why is Google’s open model (gemma3) so wrong? I am going to go out on a limb and suggest that the same dynamic is at play with OpenAI’s (and everyone else’s) flagship, and subscription-based, models and their open models: Accuracy is a metered toll road and everything else is just a mystery-meat coleslaw of signals.
He goes on:
In a nutshell, we are on our way to replicating the same environment that the collective-we have fostered around processed foods for the last 75 years – all the problems concerning availability, cost, nutrition, consequences – but with knowledge and understanding itself.
The comparison to processed foods is apt. We are finding ourselves in quite the predicament and models getting ‘righter’ over time is not the answer. I get the sense that we are walking through a one-way door and on the other side waits a perpetual diet of mental hot dogs.
“There’s not a shred of evidence on the internet that this band has ever existed”
An AI “band” is racking up hundreds of thousands of monthly listeners on Spotify. What kind of world are we living in? Soon there will just be an opaque layer of robots between all human connection.
AllTrails has a new generative AI-powered feature that lets hikers ask for shorter or more scenic routes:
If a robot told you that walking off a cliff was your fastest route home, how close would you get to the edge before turning it off?
What could go wrong?
How I Used AI Today
Wednesday, May 28, 2025
My son is having a birthday and graduating from high school in the span of five days, so Jilly and I thought we’d do something special and get him a joint gift to celebrate both occasions. He’s very much interested in photojournalism and will be entering university in the fall to study communications. We thought a nice DSLR camera would be the perfect gift.
I don’t know much about cameras or lenses, so I asked Claude for some help. My initial prompt:
I want to buy my son a DSLR camera for his birthday/graduation. You are an expert in photography and photography equipment. Could you help me select the right camera, lenses and bag? I’d like to spend about $X total.
Claude and I then chatted about my son’s photographic interests, his current level of expertise, and several of my purchase preferences/requirements. The output of this conversation was a tight list of three potential camera bodies w/ corresponding lens pairings.
I then asked Claude to find the best deals for two of the options and it returned the top three online retailers for both based on price, service and customer reviews.
After validating some pricing details, I made the purchase. In total, I estimate this approach saved me several hours of research and analysis paralysis, which I am known for when making purchases like this.
The camera kit arrived two days later, we gave it to him on his birthday and he used it for the first time last night to cover his school’s WPIAL title baseball game.
Note: This post is part of an ongoing series called How I Used AI Today, inspired by friend and former colleague Beck Tench who does something similar over on LinkedIn. I’m starting to believe the thinking and narrative around generative AI is becoming too binary. The intent of this series is to keep me publicly honest and intellectually responsible with my use of this emerging technology.
Greg Storey on the binary nature of AI discourse these days:
The assumption that tools passively rewire us, no matter our intent, no matter our context, no matter our discipline, is reductive at best and infantilizing at worst.
Worth a read. This is more nuanced than AI is evil / AI is the future.
How I Used AI Today
Friday, May 23, 2025
I fed Claude some examples of bi-weekly stakeholder updates for products I previously managed. I then asked it to learn the format, understand the tone of the writing, and help me draft a first installment for a new initiative I’m leading. We chatted for a few minutes about the voice I desired, recent progress by the team, and the health of the project. After I provided adequate context, Claude generated a draft for me to review. The initial version was very good and only required a few copy and formatting edits. I was happy with the result and it saved me about an hour this morning.
Note: This post is the first in an ongoing series called How I Used AI Today, inspired by friend and former colleague Beck Tench who does something similar over on LinkedIn. I’m starting to believe the thinking and narrative around generative AI is becoming too binary. The intent of this series is to keep me publicly honest and intellectually responsible with my use of this emerging technology.