Penn State University has inked a deal with Flock Safety, bringing the company’s cameras and a new layer of surveillance capitalism to its State College campus. Jonathan Chiu does some digging for The Daily Collegian.

Startups Brag They Spend More Money on AI Than Human Employees:

Startup CEOs who are “tokenmaxxing” are bragging that they are spending more money on AI compute than it would cost to hire human workers. Astronomical AI bills are now, in a certain corner of the tech world, a supposed marker of growth and success.

This is so gross. I can’t wait for the agents at one of these companies to hallucinate an error so consequential it brings their entire business to its knees. Sadly, that’s what it will take to reverse this trend.

I love this sentiment from Jeffrey Zeldman with respect to how the proliferation of words ultimately degrades meaning:

The web taught us to fill space. AI finished the job. Content covers every surface now, every silence anxious to be noise. Learn to be quiet on purpose.

The Center for Humane Technology has published an excellent report called The AI Roadmap: How We Ensure AI Serves Humanity. It’s a very well-done, comprehensive and thoughtful proposal that centers on three key pillars required to alter the existing ecosystem: establishing behavioral norms, passing responsible legislation and introducing some level of governance over AI product design.

I’m not sure what the next steps will be for CHT after publishing this roadmap, but it seems like a solid foundation upon which to build action.

Imagine if all the money being invested to make consumer-grade LLMs more intelligent was being invested in schools and curriculum to make young human beings more intelligent.

Ridicule as praxis. Jason Koebler writing about the implosion of the agreement between Disney and OpenAI that would have brought user-generated AI slop from the latter’s Sora into the streaming services of the former:

Sora is dead. May the memory of its four-month existence as a copyright infringement machine that was also used to make videos of men strangling women and ICE arresting undocumented immigrants be a blessing.

More of this please. More critical thinking. More vocal and open ridicule of these AI hype mongers. This is all snake oil.

404 Media dissecting the death of the Metaverse, an $80 Billion Dumpster Fire Nobody Wanted:

The complete and utter failure of the metaverse is a reminder not just of the fact that the future Silicon Valley is force feeding us is not inevitable, but that quite often these oligarchs quite simply cannot relate to real people, don’t know how or why people use their products, and very often have no idea what they’re doing.

The sentiment in this quote is what helps me sleep through the night with respect to AI & LLM fetishization. The future we’re being force-fed is not certain.

Elan Ullendorff describes the AI conundrum facing university students & staff in the latest Escape the Algorithm newsletter:

The large language models evolve with each waning moon, peeling off a layer of scabrous skin and beckoning the students to stay ahead of the curve. The students tremble-walk this curve like a tightrope, trying not to look down for fear that they will catch sight of the world they are setting aflame.

Jason Koebler saying the quiet part out loud:

…studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet.

I don’t publish on the internet for a living, but we need a healthy, human-centered internet that can support those who do. Find the humans who publish things of value and pay them directly.

Bernie sums it up well. From here on out, my vote goes to the person who advocates for pumping the brakes and establishing effective guardrails for AI. This is a generational issue and we must get it right.

In case you need another reason to believe Palantir and CEO Alex Karp are simply an arm for this authoritarian regime:

Karp’s message is loud and clear: My technology will take political capital away from one of your greatest enemies—liberal women with degrees—and give one of your favorite demographics to patronize—working-class men—more political power to transfer to you. He’s aligning his technology with both GOP political strategy and the larger male-centered culture war that the right has been waging for the better part of a decade now.

New to me: youraislopbores.me

This website feels like the before times, in that fun, subversive way we used to critique culture before culture was flattened by the algorithm. Basically, the site lets you ask other random humans the questions you’d ask ChatGPT. Like Chat Roulette for your mind.

Seth Godin on slop, be it human or AI-generated:

If we measure the cost of what we create instead of its value, it’s likely we’ll end up with slop.

He’s right. Slop has existed for most of human history. But human slop is the result of human labor. That’s inherently more valuable than AI slop.

Mike Monteiro on the news that shares of Block (a payments company run by Jack Dorsey) are soaring after it slashed staff by 50%:

Dorsey’s latest chewtoy, has laid off 4,000 people. Which made its stock rise 24%. When 4,000 people lose their livelihood, their ability to pay their rent, their ability to go to the doctor, their ability to look out for their children, and the system that we live under cheers that on… That system needs to be destroyed.

The Promise is Perception

Casey Newton in Platformer:

One of the more famous papers about artificial intelligence last year came from METR, a nonprofit that evaluates frontier AI models. In July, it published results of a randomized controlled trial studying experienced open-source developers. It found that when they use AI tools, completing tasks takes them 19 percent longer than when they go without. That was surprising enough. But the real twist is that when these same developers were asked what AI had done for them, they reported that it had sped them up by 20 percent.

This was a fascinating dive into the professional productivity that’s promised by our AI overlords. We’re starting to learn that much of this productivity is merely perceived. I’ve felt this in my own professional life, and have drastically reduced my use of Copilot at work because I found myself spending far too much time reviewing and correcting incorrect output from the model.

I also found many of the outputs, when accurate and correct, were just OK and simply not up to my professional standards. So much of my daily work requires communicating effectively through writing – explaining value and impact to leadership; acting as a translation layer between engineering, design and the business; and aligning stakeholders to broad, complex initiatives – all of which need to be buttoned-up to my highest standard. I’m simply not getting that quality from any AI model I’ve tried.

A Running List of Human-First Orgs

Last week I posted about the differentiation opportunity for companies and organizations that publicly lean into humanity and away from artificial intelligence. Since then, I’ve started to notice some examples of this in the real world.

I think it’s important to raise the visibility of Companies With Guts. Therefore, this post will become an evolving list of organizations that take a public, pro-human, anti-AI stance. If you know of good examples, please share via email and I will update the list here ASAP.

Last updated 2026-01-21

On Superhumanity

Scott Belsky writes about the promise and vitality of ‘Superhumanity’ in a world that’s becoming ever-obsessed with artificial intelligence. Several of the ideas in this piece resonate with me.

First, I think Scott’s definition of taste as a combination of INPUTS, FILTERS and DISCERNMENTS is really smart. As AI evolves, humans will remain tastemakers. How we lean into the experiences we seek out (INPUTS), the things we actively choose to ignore (FILTERS) and the decisions we make (DISCERNMENTS) based on our inputs and filters will be the key to thriving in a post-AI world.

He rightly points out that establishing human taste will not be enough. We will need to activate our human agency to act upon our tastes. This often resembles – and in the post-AI world it should continue to resemble – audacity: the distinctly human belief that we can achieve the impossible or be the first to accomplish something. AI can only know the past, but humans can envision a future.

I also thought his jazz-based approach to using AI is unique and worth considering:

You must engage AI with flexibility rather than having a fully formed sonata in your head and no willingness to deviate from it. You must discover the “instruments” AI is best at, and you must complement AI with what it lacks - your taste, agency, and natural human tendencies.

I highly recommend this piece, as well as Scott’s other writing, for anyone who thinks critically about technology and our human experience living with it.

Here, I fixed ChatGPT’s value prop for including advertising for you:

In the coming weeks, we’re also planning to start testing ads in the U.S. for the free and Go tiers, so more people can benefit from our tools race to the intellectual bottom with fewer usage limits or without having to pay by conflating plagiarism with marketing.

Emphasis is mine.

Humanity as Differentiator

I just spent a few days in New York City at NRF 2026, the premier conference for the retail industry in the United States. For those unfamiliar, NRF is a behemoth – nearly 40,000 retail practitioners descending on the Javits Center each January.

My primary observation: AI wasn’t just present at this year’s event, it dominated everything. Every session. Every conversation. Every exhibitor pitch. Everything was presented through an AI lens. You couldn’t avoid it, even if you tried.

And I did.

NRF 2026 wasn’t billed as a “retail & AI” conference, but that’s exactly what it was. AI optimizing your supply chain. AI promising store operations efficiency. AI running wild on product catalogs to enable agentic shopping. Robots massaging data so websites are easier for other robots to navigate.

I wasn’t surprised by the saturation. Just look around. AI is being shoehorned into our daily interactions and companies are desperate to brand themselves as AI companies. The US economy is being propped up by AI investment, and retail is claiming its share.

What struck me was the opportunity cost.

In a landscape where every company fetishizes AI – in their products, in their operations, in their marketing – there’s a massive opening for companies brave enough to lean the other direction. Into humanity. Into the natural world. Into what makes us distinctly human.

I caught glimpses of this alternative path. Ryan Reynolds’ keynote touched on it. My friend Justin Weinstein’s talk about grocers serving their communities embodied it. But these were exceptions in a sea of sameness.

Here’s what I’m imagining: What if a company stood up and publicly declared they would resist artificial intelligence in their operations? What if they said instead they’d invest in people – the people who make the company work and the people the company serves? And what if they went all-in on this message in their marketing?

That decision would instantly differentiate them in a meaningful, substantive way.

The irony of everyone chasing the same AI strategy is that it eliminates competitive advantage. When everyone optimizes for the same thing, nobody stands out. But a company that deliberately chooses humanity over automation? Right now that’s an admirable decision and a signal customers can’t ignore.

I don’t know about you, but that’s the kind of company I’d get in line to support.

Pittsburgh’s Public Source investigates the uptick of Mister Rogers deepfakes permeating the social internet:

Lobbing curse-laden insults with TV’s famously serene painter Bob Ross. Cracking jokes about school shootings. Being escorted in handcuffs by federal authorities. No, it couldn’t be Pittsburgh’s beloved icon Mister Rogers — the picture of moral clarity and togetherness. But it sure looks and sounds like him. What gives?

Show me one useful, positive output from these deepfake engines. I’ll wait. What value do they contribute to our lives? How do they improve the world? Again, I’ll wait.

These image and video generators are the bottom feeders of this AI bubble. They provide no respectable use and no societal value. They are detrimental theft machines.

We’re being asked to use AI tools more at work. I just spent 30 minutes prompting & subsequently being gaslit by Copilot (our approved LLM) for a task that ultimately took me 10 minutes to do with actual intelligence. Is this the productivity they’ve promised us?

Sir Tim Berners-Lee writing in The Guardian about why he gave the World Wide Web away for free, how we might instill that ethos back into broader digital culture, and the dangers of an unregulated & unchecked AI industry:

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

Cory Doctorow:

AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can’t do your job, and when the bubble bursts, the money-hemorrhaging “foundation models” will be shut off and we’ll lose the AI that can’t do your job, and you will be long gone, retrained or retired or “discouraged” and out of the labor market, and no one will do your job. AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations.

Peter Thiel says regulating AI will hasten the Antichrist, and that the devil promises peace and security through regulation of tech. How did we get here? (Rhetorical) How do we get out of here? (Not rhetorical)

Harvard Business Review says AI-generated “Workslop” is destroying productivity:

The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.