Maybe it's because I'm a strategist but I obsessively link and source everything I write about. If I don't have proof of something I won't write it, plain and simple.
Glad you went ahead and finished this! Very timely. Sparked many, many thoughts, but in the name of tasteful discernment, I'll curate my words like so:
post-GPT ubiquity, we have to assume that everything is fluff on here, until proven substantial. Even viral long reads. Even long reads with links. Even long reads by seemingly credible people. Even long reads liked or restacked by credible people! That’s the wild part for me. So, as with any other engagement-based platform, content on here is just entertainment until proven informative. Easy to say that of course…
I think the stickier side of this topic is when it comes to research. Firstly because AI-assisted writing will undermine what’s out there to quote and cite in the first place, and secondly because even the AI research tools that work, like Deep Research, have drawbacks for research quality: http://archive.today/2025.06.22-020625/https://www.economist.com/finance-and-economics/2025/02/13/the-danger-of-relying-on-openais-deep-research
Makes sense! Like Thelot's essay says, everything is now "default fake." That's quickly become how we treat visuals, but it feels like we're in this interim space where we haven't yet adapted to be as skeptical of longform writing. And agree on the research, it's like a hall of mirrors. What you're describing is something I've run into before — I don't remember the statistic now (I learned about this several years ago), but there's a very commonly cited sustainable fashion stat that gets repeated constantly, yet if you try to find the originating source you end up clicking through links, 40 articles deep, all just referencing each other, until you reach one where you realize the stat is totally false.
Hall of mirrors is such a good way to put it 😵💫 Yeah I'm pretty much echoing the point in Thelot’s essay! And as you say, although we’ve shifted our thinking about visual media, we’re not there yet with text. I wonder what he's written about default fake when it comes to writing specifically. Cause that ‘am I crazy’ feeling, where you read something that's textually convincing but not substantively so, comes with its own challenges, doesn’t it? How do we judge something that's in the uncanny textual valley?
Btw you might find this interesting: https://www.theamdash.com. An anti-AI take on the em dash. Fascinating signal about the future of writing imo.
Yes thaaank you for calling this out: "there’s a dissonance between how the aspirational portrayal of the aesthetics of intellectualism and the worrisome data about a literacy crisis amongst young people." I've felt like I'm going crazy reading all the recent pieces that kinda wash over the decline of numeracy and literacy skills... or don't even include it at all.
yep, promoting the aesthetics of intellectualism entirely divorced from actual literacy and critical thinking.
I was working on a service journalism piece and put out a call for expert sources to interview. I received multiple responses using the same phrases -- "budget-conscious", "my unconventional tip" -- and sharing the same advice (which happened to echo the language of my source request). Polished and quote-ready. How many overworked freelancers are using these "sources"?
When we search for articles instead of asking ChatGPT, we assume we're choosing humanity over AI. What if it's AI all the way down?
Yep, unfortunately I think there may be a certain imminent point of no return, where there is so much artificial information clogging the internet that it becomes impossible to distinguish what's authentic. So....back to physical libraries and archives I guess?
Loved this. Somewhat related, but I've been thinking a lot about how AI will impact a younger generation's ability to think critically. I worry because I see marketers using tools like ChatGPT and taking its outputs at face value. Like you pointed out, I don't see many people verifying information sources or even reading copy closely for structure and clarity. It's incredibly concerning. I don't know what the solution is. Teach people to use AI better while still using their brain?
Speaking of which, I haven't read this yet but have it bookmarked: https://time.com/7295195/ai-chatgpt-google-learning-school/
Agreed, concerning. I think in general we've really slacked on teaching critical thinking (often favoring STEM over the humanities, for instance). That part isn't new - politics has made it clear for nearly a decade now. But we haven't bothered to fix the issue, and with AI now so accessible, it feels like we're on a rapidly accelerating downward spiral. :/
People in tech value independent/free thinking, but they all follow the same people, consume the same media, and end up assuming their model / way of life is the "best" because the people they respect and work with arrived at the same conclusion (independently, of course). First principles, and all that. These are the kind of people who believe, as those posts mention, that with AI, taste/discernment can be the differentiator.
With AI there's also a very vocal contingent online that believes this tool has closed the gap on creativity, when it's very clear to anyone with taste/discernment that it has not. Typically the kind of person who buys a checkmark, was into web3/NFTs, and lashes out at/gloats over anyone they believe to be less than them.
Both still miss the mark, for various reasons.
On Substack it's funny when people don't reference their sources in notes - I see so many reposts from Twitter and Tumblr, and rehashed content from blogs 5-10 years old. But people take it at face value that it's authentic because of the implicit trust people place in anything that shows up here.