I love this sentiment from Jeffrey Zeldman with respect to how the proliferation of words ultimately degrades meaning:
The web taught us to fill space. AI finished the job. Content covers every surface now, every silence anxious to be noise. Learn to be quiet on purpose.
This site is closed on Sundays. I’m trying to avoid screens at least one day out of the week, and this is my way of encouraging others to consider doing the same. I’d apologize for the inconvenience, but I think in many ways modern expectations of convenience have gotten way out of hand, don’t you? Feel free to bookmark this page and come back another day, and I hope you’re able to step away from the screen and find another way to enjoy your Sunday.
Love it. Has me thinking about ways to add a bit of subversion to this site 🤔
The complete and utter failure of the metaverse is a reminder not just that the future Silicon Valley is force-feeding us is not inevitable, but that these oligarchs often simply cannot relate to real people, don’t know how or why people use their products, and frequently have no idea what they’re doing.
The sentiment in this quote is what helps me sleep through the night with respect to AI & LLM fetishization. The future we’re being force-fed is not certain.
Steve Magness makes the case that reading a book is the most counter-cultural thing we can do right now:
Perhaps it’s because a book asks something of you that almost nothing else in modern life does: it asks you to stay with one idea, in one place, for an extended period of time. It asks you to wrestle with complexity, pay attention to someone else’s thinking, and ultimately think for yourself.
I see the indie web as an antidote to the barrage of hollow oranges being served to us. Niche, personal websites. Digital artisans. There’s a wave of ‘returning to craft’ on the horizon and it’s carrying an ample supply of ripe, juicy oranges.
…studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media. We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet.
I don’t publish on the internet for a living, but we need a healthy, human-centered internet that can support those who do. Find the humans who publish things of value and pay them directly.
An internet — and, increasingly, a mass media landscape — populated by video content does not have any room for context. It’s a school cafeteria. There’s no sense as to why anyone is talking about anything and, honestly, even trying to find out why, as I’ve just done here, is probably a buzzkill.
This website feels like the before times, in that fun subversive way we used to critique culture before culture was flattened by the algorithm. Basically, the site lets you ask other random humans the questions you’d ask ChatGPT. Like Chat Roulette for your mind.
Scott Belsky writes about the promise and vitality of ‘Superhumanity’ in a world that’s becoming ever-obsessed with artificial intelligence. Several of the ideas in this piece resonate with me.
First, I think Scott’s definition of taste as a combination of INPUTS, FILTERS and DISCERNMENTS is really smart. As AI evolves, humans will remain tastemakers. How we lean into the experiences we seek out (INPUTS), the things we actively choose to ignore (FILTERS) and the decisions we make (DISCERNMENTS) based on our inputs and filters will be the key to thriving in a post-AI world.
He rightly points out that establishing human taste will not be enough. We will need to activate our human agency to act upon our tastes. This often resembles – and in the post-AI world it should continue to resemble – audacity. The human-centric audacity to believe we can achieve the impossible or be the first to accomplish something. AI can only know the past, but humans can envision a future.
I also thought his jazz-based approach to using AI is unique and worth considering:
You must engage AI with flexibility rather than having a fully formed sonata in your head and no willingness to deviate from it. You must discover the “instruments” AI is best at, and you must complement AI with what it lacks - your taste, agency, and natural human tendencies.
I highly recommend this piece, as well as Scott’s other writing, for anyone who thinks critically about technology and our human experience living with it.
Julia Bensfield Luce writing for the BBC about lost digital images from the early aughts:
There’s a black hole in the photographic record that spans across our entire society. If you had a digital camera back then, there’s a good chance many of your photos were lost when you stopped using it.
In the beginning, there was the pure thing. Then came corruption, commercialization, normies, and death. We’re all walking around with these little creation myths about every domain we care about, and they all end the same way: with us as witnesses to a decline that began right after we showed up.
Culture is cyclical. I think it’s helpful to think about culture not as a binary between alive and dead, but rather an evolution from one point to another. Culture builds on culture builds on culture. It’s progressive, derivative and organic.
Pittsburgh’s Public Source investigates the uptick of Mister Rogers deepfakes permeating the social internet:
Lobbing curse-laden insults with TV’s famously serene painter Bob Ross. Cracking jokes about school shootings. Being escorted in handcuffs by federal authorities. No, it couldn’t be Pittsburgh’s beloved icon Mister Rogers — the picture of moral clarity and togetherness. But it sure looks and sounds like him. What gives?
Show me one useful, positive output from these deepfake engines. I’ll wait. What value do they contribute to our lives? How do they improve the world? Again, I’ll wait.
These image and video generators are the bottom feeders of this AI bubble. They provide no respectable use and no societal value. They are detrimental theft machines.
The ringer was off, but the sudden vibration was jarring. It was the fourth such vibration in as many minutes. The buzzing phone, lying face down on my desk, sent tremors through my forearms as those muscles powered keystrokes from my hands.
This particular notification was a mother on Adeline’s soccer team alerting the team chat app that her daughter wouldn’t be at training that evening. The three previous notifications included a retail store marketing an upcoming sale, a mention from Mastodon, and a text message from my health provider requesting feedback from a recent visit.
Four disruptions in four minutes: my phone had become an unforgiving foghorn. These four minutes were not a one-time exception. Never-ceasing notifications had become the rule – my reality – over time. I was once deliberate and discerning about the tentacles I allowed to dictate my attention, but my guard had eroded and they were slipping through like seepage.
I’m not sure what it was about these four notifications or this particular four-minute window, but they created an awareness of how these FOMO-driven flechettes have been impacting my presence with, and focus on, the tasks, ideas and people immediately before me. I noticed myself feeling splintered in this moment. Pulled apart. Traction lost.
In this instant of self-realization, I opened the settings on my phone, disabled all notifications except for text messages and phone calls from my immediate family, and saved these changes as a custom focus option I now call Mental Hygiene. This has been my default in recent weeks and it’s improved my quality of life greatly.
FOMO – or fear of missing out – is an interesting cultural abstraction. Technology has conditioned us for speed, constant reachability and the need to always be aware of the latest updates, otherwise we’re left behind.
But filtering out unwanted noise is not being left behind. It is prioritizing attention on what matters. It’s protecting a level of focus that becomes rarer with each new notification and version update.
We should not fear missing out. Instead, let’s normalize a freedom of missing out. A freedom to let the insignificant and immaterial slide into the ether unnoticed. A freedom to be bored or reflective. A freedom that honors stillness and slowness. A freedom that empowers a focused mind, time spent with meaning, and whole presence in any given moment.
Sir Tim Berners-Lee writing in The Guardian about why he gave the World Wide Web away for free, how we might instill that ethos back into broader digital culture, and the dangers of an unregulated & unchecked AI industry:
Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.
For the open social web to thrive, we need to go back to real communities with real-world use cases and solve their problems better than anything else. Not the needs of individuals within them, but of the interconnected communities themselves. We need to build social networks that deeply support their needs, and then social media that helps them thrive.
The WiFi unexpectedly went out at my house last Friday. It was completely random — working fine one minute, then zero connectivity the next. I restarted my router a few times and checked all my connections. No luck. I couldn’t spend much time troubleshooting because I was on a deadline for work, so I left the house and worked the remainder of the day from a neighborhood coffee shop.
The technical issue that brought down the WiFi isn’t important. What’s important is that it was something I couldn’t fix myself and the provider needed to send out a technician to resolve it. The extra-important part is that they couldn’t put us on the schedule until the following Tuesday. This meant we’d have no internet at home for five days.
As an elder millennial, the thought of an offline extended weekend excited me. I remember well, and often long for, pre-internet living. This wasn’t the case for my 13-year-old who lives on YouTube or my 18-year-old ESPN freak who was on his way home from Penn State (they were on a bye) to visit for the weekend.
What transpired over those few offline days was special. Yes, our phones still had cellular connections, so we weren’t completely disconnected. But the lack of WiFi meant our laptops remained closed, our tablets untouched, and our smart TVs dark.
Instead, we spent quality time together, mostly outside. We built a fire. We made margaritas. We took a few family walks with the dog. We cooked a Sunday football feast and watched the game using an antenna. We looked each other in the eye as we talked, and it was nice.
Those five days reinforced for me that life feels richer when I’m not constantly plugged in. Sometimes absence can create space for much needed presence. When Tuesday came and the technician completed his work, I was almost reluctant to reconnect because I now realize the connectivity I’d been missing wasn’t the WiFi at all.
Students, activists, tech whistleblowers, and self-proclaimed Luddites have been undertaking a series of actions, readings, and protests that will culminate next weekend, on September 27, at what they’re calling the S.H.I.T.P.H.O.N.E. (Scathing Hatred of Information Technology and the Passionate Hemorrhaging of Our Neo-liberal Experience) rally at the High Line in New York City.
Big tech has built machines designed for one thing: to hold your attention. The algorithms don’t care what keeps you scrolling. It could be puppy videos or conspiracy theories about election fraud. They only care that you keep consuming. And unfortunately nothing keeps people engaged quite like rage.
The executives at these companies will tell you they’re neutral platforms, that they don’t choose what content gets seen. This is a lie. Every algorithmic recommendation is an editorial decision. When YouTube’s algorithm suggests increasingly extreme political content to keep someone watching, that’s editorial. When Facebook’s algorithm amplifies posts that generate angry reactions, that’s editorial. When Twitter’s trending algorithms surface conspiracy theories, that’s editorial.
They are publishers. They have always been publishers. They just don’t want the responsibility that comes with being publishers.
For years, these companies have hidden behind Section 230 protections while operating more like media companies than neutral platforms. They’ve used recommendation algorithms to actively shape what billions of people see every day, then claimed they bear no responsibility for the consequences. It’s like a newspaper publisher claiming they’re not responsible for what appears on their front page because they didn’t write the articles themselves.
We need to be honest about what these algorithms are doing to our democracy. They’re not just amplifying existing divisions, they’re creating new ones. They’re not just reflecting polarization, they’re manufacturing it. Every time someone opens one of these apps, they’re being shown content specifically chosen to provoke an emotional response. That’s not neutral. That’s manipulation.
This isn’t a technology problem. This is a business and choice problem. These companies could change their algorithms tomorrow to prioritize accuracy over engagement, community over conflict, human wellbeing over profit. They choose not to because extremism is more profitable than moderation.
The solution isn’t to ask nicely for these companies to do better. We tried that. The solution isn’t to hope users will abandon these platforms en masse. That won’t happen as long as the network effects keep people trapped.
The solution is regulation. Real regulation. Not the performative theater we’ve seen in congressional hearings, but actual laws with actual consequences.
We need algorithmic transparency. These companies should be required to disclose how their recommendation systems work and what content they’re amplifying.
We need algorithmic accountability. When an algorithm recommends content that leads to violence, there should be consequences. And we need algorithmic choice. Users should have the right to see chronological feeds, not just algorithmically curated ones designed to manipulate their emotions.
Most importantly, we need to end the liability shield these companies hide behind. If you’re going to operate as a publisher, making editorial decisions about what content gets amplified, then you should face the same legal responsibilities as any other publisher.
Turn off the internet. Or fix it. Those are the only choices we have left. The time for hoping these companies will self-regulate is over. The time for treating algorithmic manipulation as an inevitable part of modern life is over. We know what these systems do. We know who they hurt. The only question left is whether we’re going to do something about it.
As a member of Gen X, I sometimes find myself getting nostalgic for my youth. When this happens I put on a Fugazi record or dive into an At the Drive-In live performance wormhole. That typically satisfies my urge. If you ever find me doomscrolling nostalgia-based AI slop, please just end me.
As if you needed any more reasons to delete your Spotify account, here’s an entire book describing the harm Spotify inflicts on artists and the culture of listening. It’s academic at times, but super informative and I enjoyed it. Long live Bandcamp.
Pour one out for dial-up internet from AOL, which will be officially discontinued on September 30th. Many of us cut our adolescent internet teeth back in the day to those omnipresent CD-ROMs and that glorious sound of the dial-up modem handshake. A small part of me is sad about this.
The work of a product team, when working with new technology, is to abstract away as much of this complexity as possible, so that it feels friendly and approachable to new people.
I stumbled upon this post by David J. Roth and can’t stop thinking about the concept of “brains being defeated by phones.” It so eloquently sums up the state of humanity right now and one of my biggest fears is that there’s no walking back from it.
Not physical letters, but digital letters that arrive with traditional mail’s rhythm. It’s a private group newsletter that everyone contributes to and receives. It’s intentionally slow, purposeful, and deeply gratifying — a low-stress, high-signal way to stay connected that creates meaningful moments in a social world dominated by drive-by likes and fleeting attention.
I love this concept and I’m thinking of a number of cool topics worth exploring in this small group format:
A record club where we share new additions to our collections
An adventure club that shares highlights and recaps of running, cycling, hiking or climbing endeavors
A BBQ club focused on smoker & grill experiments and recipes
If one of these ideas resonates with you, hit me up. Awesome stuff, Naz and Scott!
Update: I created a club called Get in My Earholes that asks the question, “What’s the best record you’ve added to your collection recently?” Feel free to join the club…first letter goes out on 8/9 and then every 2 weeks after that.
This informative post from A New Social does a great job highlighting the nuances and differences between bridging and cross-posting on the open web:
Notably, bridging results in more unified conversations, while cross-posted conversations are more fragmented.
My site bridges to both Mastodon and Bluesky, which is great because I never have to look at either in order to participate in conversations on both platforms.
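To make the unified-vs-fragmented distinction concrete, here’s a minimal sketch in Python. This is a hypothetical data model of my own, not any real ActivityPub or AT Protocol API: with cross-posting, each network holds its own independent copy of a post, so replies pile up in separate threads; with bridging, one canonical post collects replies from every network.

```python
# Hypothetical model of cross-posting vs. bridging (not a real API).
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    replies: list = field(default_factory=list)

# Cross-posting: each network gets its own copy, with its own thread.
mastodon_copy = Post("Hello, open web!")
bluesky_copy = Post("Hello, open web!")
mastodon_copy.replies.append("Hi from Mastodon")
bluesky_copy.replies.append("Hi from Bluesky")
fragmented_threads = [mastodon_copy.replies, bluesky_copy.replies]

# Bridging: one canonical post; replies from both networks land together.
canonical = Post("Hello, open web!")
canonical.replies.append("Hi from Mastodon")
canonical.replies.append("Hi from Bluesky")

print(len(fragmented_threads), len(canonical.replies))  # → 2 2
```

Two threads of one reply each versus one thread of two replies: that’s the fragmentation the A New Social post is describing, and why bridged replies read as a single conversation.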
An AI “band” is racking up hundreds of thousands of monthly listeners on Spotify. What kind of world are we living in? Soon there will just be an opaque layer of robots between all human connection.
Not to get morbid, but turning 47 yesterday got me thinking about the persistence and legacy of this site if I were to suddenly be gone. One of the main purposes of StaticMade.com for me is to leave a public mark, a detailed record of my time, thoughts and consciousness while on this planet. How might I ensure it persists if something unforeseen happens?
If you have ever paid for hosting with us, and you haven’t violated our terms of service or community guidelines, we keep your blog online forever, even after you’ve stopped paying for your subscription.
I think this is a very proactive and generous policy. So long as there is a plan for domain management, the site should remain online in perpetuity. Thanks Manton!