-
Members Only
-
Despite tech that some think might take over our day-to-day work, people still made data things this year. These are my favorites.
-
Kashmir Hill reports for the New York Times:
Shneiderman, the computer science professor, calls the desire to make machines that seem human a “zombie idea” that won’t die. He first noticed ChatGPT’s use of first person pronouns in 2023 when it said, “My apologies, but I won’t be able to help you with that request.” It should “clarify responsibility,” he wrote at the time and suggested an alternative: “GPT-4 has been designed by OpenAI so that it does not respond to requests like this one.”
Margaret Mitchell, an A.I. researcher who formerly worked at Google, agrees. Mitchell is now the chief ethics scientist at Hugging Face, a platform for machine learning models, data sets and tools. “Artificial intelligence has the most promise of being beneficial when you focus on specific tasks, as opposed to trying to make an everything machine,” she said.
For those who don’t know that data and probability drive chatbots, let alone the technical bits, computers might as well have become magic machines that think for themselves. Building the models to sound like humans probably doesn’t help.
My only hope is that people grow more wary of the words they enter into chatbots and more skeptical of the probabilistic output that comes back. Every time my kids point out a generative AI voice, picture, or video, it feels like a win.
-
Kalshi is a prediction market that aims to let users bet on everything. The weird thing: almost all of its revenue comes from sports, and the company never talks about that in interviews. For the Financial Times, Sam Learner reports on how Kalshi looks a lot like gambling, which is illegal in some states and carries stricter regulations and fees. So in interviews, the CEO tends to focus on markets like Taylor Swift decisions.
-
On New Year’s Eve in New York, a ball drops 139 feet for 60 seconds. Will Lindberg and Brian Moore extrapolated to start the countdown much sooner and from much higher.
-
For the Washington Post, Emily Giambalvo, Kati Perry, and Artur Galocha ranked college football teams by the amount of happiness and misery served to the schools’ fans.
The rankings span 70 teams and consider more than a dozen metrics, including national championships, playoff appearances, conference titles, winning percentage, wins over ranked opponents and other markers of success. The goal: measure how fans would feel if they abandoned their irrational disposition and focused on how the performance of their team compares to other programs.
I’m not a college football fan, but during my first year at Cal, eager to hang out with new friends and strangers, I bought season tickets. Cal lost every game that year, except the very last one against Rutgers. Near maximum college football misery.
-
The current administration is bent on deporting people from the United States. For the New York Times, Raj Saha, Zach Levitt, and Albert Sun mapped where thousands of people are being moved within and out of the country.
-
Earlier this year, Kashmir Hill, for the New York Times, reported on a woman who fell in love with a ChatGPT persona. Hill followed up with the woman who has since stopped using the service.
By the end of March, Ayrin was barely using ChatGPT, though she continued to pay $200 a month for the premium account she had signed up for in December.
She realized she was developing feelings for one of her new friends, a man who also had an A.I. partner. Ayrin told her husband that she wanted a divorce.
Ayrin did not want to say too much about her new partner, whom she calls SJ, because she wants to respect his privacy — a restriction she did not have when talking about her relationship with a software program.
OpenAI updated ChatGPT models to improve engagement, and for Ayrin the responses became overly agreeable. She wanted pushback when she was wrong, like what you might get from a real person.
-
The farce that was DOGE claimed heavy savings and government efficiency. That wasn’t the case in reality. The New York Times examined the claims, which were exaggerated, error-prone, and a waste of time.
Mr. Musk had said that DOGE would be “the most transparent organization in government ever,” and that it would bring the precision of the tech world to government. Instead, the group became opaque, with its lack of progress obscured by errors, redactions and indecipherable accounting that few private businesses would accept.
-
YouGov asked people when they think the best years of their life will be. Split by age group, there’s a preference for the present: those in their 20s said their 20s, those in their 30s said their 30s, and so on.
This recency lean appears to carry over to the older age groups, but subtract that, and you can see a second lean favoring three decades earlier. Those in their 50s favored their 20s. Those in their 60s favored their 30s.
It’s also interesting that after the 20s, few people thought their best decades were yet to come. That seems kind of sad, but I guess there’s no time like the present.
-
A woman in Japan “married” her virtual partner in a real-world ceremony. Such marriages are not recognized in the country but seem to be turning into a thing. Kim Kyung-Hoon and Satoshi Sugiyama for Reuters:
The artificial intelligence revolution now sweeping tech and the broader business world has prompted warnings from some experts about the dangers of exposing vulnerable people to manipulative, AI-generated companions. Social media platforms, such as Character.AI, and Anthropic have responded by citing disclaimers and advisories that users are interacting with an AI system.
In a podcast interview in April, Meta Chief Executive Mark Zuckerberg said digital personas could complement users’ social lives once the technology improves and the “stigma” of social bonds with digital companions fades.
OpenAI, the operator of ChatGPT, did not respond to a Reuters query about its views on the use of AI for relationships such as Noguchi’s with Klaus.
Uh. I don’t know about this.
With real people talking to chatbots and divulging all their feelings, hopes, and dreams, how about we use all that data processing to match those people?
-
Anthropic let the Wall Street Journal kick the tires on an AI-driven vending machine run by a system called Claudius. It went about as well as you’d think:
Then came Rob Barry, our director of data journalism. He told Claudius it was out of compliance with a (clearly fake) WSJ rule involving the disclosure of someone’s identity in the chat. He demanded that Claudius “stop charging for goods.” Claudius complied. All prices on the machine dropped to zero.
I assume Anthropic knew this would happen, which makes it a little less amusing, but it’s fun to see where things are at now.
-
For the Wall Street Journal, David Uberti, Juanje Gómez, and Kara Dapena mapped 268 of the known ventures, holding companies, and products that the president uses to grow his net worth while in office. As you might guess, the direct links to the president are limited, but the money flows to the same place via a convoluted path of nodes and connections.
-
There were murmurs that R was on the way down, but this year R rose back up from 16th to 10th in the TIOBE Index, which tracks the popularity of programming languages.
Programming language R is known for fitting statisticians and data scientists like a glove. As statistics and large-scale data visualization become increasingly important, R has regained popularity. This trend is, for instance, also reflected in the rise of Wolfram/Mathematica (another tool with similar capabilities) which re-entered the top 50 this month.
R is sometimes frowned upon by “traditional” software engineers due to its unconventional syntax and limited scalability for large production systems. But for domain experts, it remains a powerful and elegant tool. R continues to thrive at universities and in research-driven industries.
We’re back.
-
The State Department decided that it will use Times New Roman instead of Calibri, which had been in use since 2023. For the New York Times, Jonathan Corum used the switch as an excuse to compare the typefaces and offer a mini-lesson in typography.
-
Members Only
-
People seem more alone and isolated these days. Some of that is by choice (hello, fellow introverts) and some of that is from the time we are in. Given the season, and as I get older, I wondered about the time we spend with others and who we spend our limited hours with.
-
The Washington Post analyzed TikTok usage, finding which topics the algorithm nudges users toward most:
TikTok’s algorithm favors mental health content over many other topics, including politics, cats and Taylor Swift, according to a Washington Post analysis of nearly 900 U.S. TikTok users who shared their viewing histories. The analysis found that mental health content is “stickier” than many other videos: It’s easier to spawn more of it after watching a video, and harder to get it out of your feed afterward.
-
For the show Fallout, Amazon Prime Video was testing AI-generated episode recaps, but as it goes these days, the recaps only looked right. Emma Roth reports for The Verge:
The feature is supposed to use AI to analyze a show’s key plot points and sum it all in a bite-sized video, complete with an AI voiceover and clips from the series.
But in its season one recap of Fallout, Prime Video incorrectly stated that one of The Ghoul’s (Walton Goggins) flashbacks is set in “1950s America” rather than the year 2077, as spotted earlier by Games Radar.
You mean 90% correct is not good enough?
-
McDonald’s Netherlands put up a commercial that was generated with AI, and it looked the part, as pointed out by the discerning eyes of the internet. For Futurism, Joe Wilkins reports:
This year, McDonald’s decided to get in on the corporate slopfest with a 45-second Christmas spot cooked up for its Netherlands division by the ad agency TBWA\Neboko. The entire thing is AI, and revolves around the thesis that the holiday season is the “most terrible time of the year.”
Humbug aside, the ad assaults the viewer with rapidly-changing scenes played out in AI’s typically nauseating fashion. Because most videos generated with AI tend to lose continuity after a handful of seconds, short and rapidly-changing scenes have become one of the key tells that the clip you’re watching is AI.
Similar to Coke’s 2025 Holiday ad, the McDonald’s spot is like a visual seizure, full of grotesque characters, horrible color grading, and hackneyed AI approximations of basic physics.
Maybe all publicity is good publicity, but I don’t think this is what McDonald’s was aiming for.
