Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Last week on, “Please do not build the torment nexus”
No, it never works out in the movies, I mean somehow these poor delusional humans convince themselves that they can control the kill bots…
Because the Terminator franchise is from Hollywood, the heroic resisters to SkyNet are Americans. Someone didn’t watch the movies very well.
OFC the OG Terminator was a scary foreign man with a pronounced accent…
Good to know that she still has space on her war crime scorecard
what fresh hell is this
Cloudflare Introduces NET Dollar stablecoin (HN link: https://news.ycombinator.com/item?id=45471573)
I know it’s terrible being a drama gossip, but there are some Fun Times on bluesky at the moment. I’m sure most of you know the origins of the project, and the political leanings of the founders, but they’re currently getting publicly riled up about trans folk and palestinians and tying themselves up in knots defending their decision to change the rules to keep jesse singal on site, and penniless victims of the idf off it.
They really cannot cope with the fact that their user base aren’t politically aligned with them, and are desperate to appease the fash (witness the crackdowns on people’s reaction to charlie kirk’s overdue departure from this vale of tears) and have currently reached the Posting Through It stage. I’m assuming at some point their self-image as Reasonable Centrists will crack and one or more of them will start throwing around transphobic slurs and/or sieg-heiling and bewailing how the awful leftists made them do it. Anyone want to hazard a guess at a timeline?
Well, they’re already using “waffles” as a transphobic slur (irony poisoning speedrun any%), so it’s really more of a question of which transphobic slurs they’ll escalate to next.
And all this because she simply could not shut up. Which seems to be one of the oldest rules of modding a large community.
Nobody ever seems to learn the “never get high from your own supply” lesson. Gotta get that hit of thousands of people instantly supporting and agreeing with whatever dumbfuck thought just fell out.
You absolutely don’t have to hand it to zuckerberg, but he at least is well aware that he runs an unethical ad company that’s bad for the world, has always expressed his total contempt for his users, and has not posted through it.
whenever there’s a huge scandal he listens to his PR people, keeps his mouth shut, then goes on a media tour where he lies his ass off in exactly the same way at each stop. it’s that simple and yet almost no other tech people can do it
You absolutely don’t have to hand it to zuckerberg, but he at least is well aware that he runs an unethical ad company that’s bad for the world, has always expressed his total contempt for his users, and has not posted through it.
It’s an extremely low bar to clear, but I’ll begrudgingly hand it to him for being one of the few tech CEOs who didn’t actively limbo under it.
here’s a good summary of the situation including an analysis of the brand new dogwhistle a bunch of bluesky posters are throwing around in support of Jay and Singal and layers of other nasty shit
here’s Jay Graber, CEO of Bluesky, getting called out by lowtax’s ex-wife:
here’s Jay posting about mangosteen (mangosteen juice was a weird MLM lowtax tried to market on the Something Awful forums as he started to spiral)
Anyone want to hazard a guess at a timeline?
since Jay posted AI generated art about dec/acc and put the term in her profile, her little “ironic” nod to e/acc and to AI, my guess is this is coming very soon
This doesn’t include her blurb about, “are you paying us? where???”
But weren’t there a multitude of people clamoring for a Bluesky subscription service from the get-go? Out of recognition that this situation was one of the potential failure modes?
The question on getting paid might give credence to the rumors that they’re running out of money and won’t make it (user-growth wise) as an ad platform. Which, lol and also lmao.
Yeah, I can’t see that they’re doing anything besides burning runway. It’s probably shortly going to become cliche to say, “glad I never made an account there,” but, welp, glad I never made an account there
Huh, didn’t know about the guy behind tangled.org (Anirudh Oppiliappan) being a waffle enthusiast too 🫤 Just visited his bsky profile, and he’s enthusing about a “decentralised accelerationism” post by jay.
I hadn’t really seen the point of the tangled project (I’m not sure what atproto brings to version control) but I was interested in an ecosystem around the jujutsu vcs stuff. I guess I won’t find that here.
ok plz explain what “waffles” means in this context
TL;DR: It’s all a meme-poisoned bluesky circle-jerk involving transphobes and people who desperately want transphobes to think they’re cool.
The link from self to mcc’s bluesky thread sums up this stuff with references, but to attempt to summarise the summary, there’s a tweet from the depths of time (2017) that says
Twitter the only place where well articulated sentences still get misinterpreted. You can say “I like pancakes” and somebody will say “So you hate waffles?”
Bluesky ceo jay graber uses “waffles” as shorthand referencing this post, and whipped it out when asked about bluesky’s ongoing unwillingness to do anything about noted transphobe jesse singal, who has since posted about how much he loves waffles.
jfc
Thanks for taking the time to expand
While most bullish outlooks are premised on economic reacceleration, it’s difficult to ignore the market’s reliance on AI capex. In market-pricing terms, we believe we’re closer to the seventh inning than the first, and several developments indicate we may be entering the later phases of the boom. First, AI hyperscaler free-cash-flow growth has turned negative. Second, price competition in the "monopoly-feeder businesses” seems to be accelerating. Finally, recent deal-making smacks of speculation and vendor-financing strategies of old.
"You know, I never defrauded anyone,” says Sam Bankman-Fried
“You know, I never sent the boys across the Isonzo without believing we could win,” said Luigi Cadorna
I predict Sam will lose big on the S&C claims. They have made fuckin bank on this bankruptcy, but also their fees have already been ruled entirely reasonable given the shitshow in question, and the near complete recovery for creditors will make them look even better.
Guys according to LW you’re reading Omelas all wrong (just like Le Guin was wrong)
https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread
Funny to type all that up and also go ‘Le Guin Disagrees’
(not that having a different read than the author intended is strange, I mean I have a dystopian reading of Starship Troopers which I think makes much more sense than what was intended).
I think I read Starship Troopers before I saw the movie, because a small scene reveals that Johnny Rico is of Filipino descent (his mother tongue is Tagalog) and I remember wondering if that would be part of the movie. Samuel R Delany mentions that scene as something that made him feel included in SF.
I was very young when I read it but even then I could read it as proto-fascist (or rather military-authoritarian, a bit like a cod-Roman Republic)
Choice sneer from the comments:
Omelas: how we talk about utopia [by Big Joel, a patient and straightforward Youtube humanist,] [has a] pretty much identical thesis, does this count?
Another solid one which aligns with my local knowledge:
It’s also about literal child molesters living in Salem Oregon.
The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It’s not intended for libertarian gotchas because it wasn’t written in a philosophical style; it’s a narrative that conveys a mood and an ethical framing.
One of the many annoying traits of rationalists is their tendency to backproject classic pieces of literature onto their chosen worldview.
So, a bit of a counter to our usual stuff. A migrant worker here won a case against his employer, who had linked his living space to his employment contract (which is forbidden), using chatgpt as an aid (how much is not told). So there actually was a case where it helped.
Interesting note on it: these sorts of cases have no case law yet, so that might have been a factor. No good links for it sadly as it was all in Dutch. (Can’t even find a proper writeup on a bigger news site, as a foreigner defending their rights against abuse is less interesting than some other country getting a new bishop). Skeets congratulating the guy here https://bsky.app/profile/isgoedhoor.bsky.social/post/3m27aqkyjjk2c (in Dutch). Nothing much about the genAI usage.
But this does fit a pattern: like with blind/bad-eyesight people, these tools are being used by people who have no other recourse because we refuse to help them (this is bad tbh, I’m happy they are getting more help don’t get me wrong, but it shouldn’t be this substandard).
LLMs are the Dippin Dots of technology.
That’s an unfair comparison, Dippin Dots don’t slowly ruin the world by existing (also, they’re delicious)
It will always be the ice cream of the future, never the ice cream of today
The Guardian shat out its latest piece of AI hype, violating Betteridge’s Law of headlines by asking “Parents are letting little kids play with AI. Are they wrong?”
Do we have a word for people that are kind of like… AI concern trolls? Like they say they are critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of centrists or (neo) libs. But for AI.
Bonus points if they also for some reason say we should pivot to more nuclear power, because in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)
E: Maybe it’s just sealion
sealAIon
Sealions is a bit more specific, as they do not stop, and demand way more evidence than is normal; Scott had a term for this, forgot it already (one of those more useful Rationalist ideas, which they only employ asymmetrically themselves). Noticed it recently on reddit: some person was mad I didn’t properly counter Yud’s arguments, while misrepresenting my position (which wasn’t that strong tbh, I just quickly typed them up before I had other things to do). But it is very important to take Yud’s arguments seriously for some reason, reminds me of creationists.
Think just calling them AI concern trolls works.
Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?
I mean that’s the fundamental problem, right? No matter how much better it gets, the things it’s able to do aren’t really anything people need or want. Like, if I’m going to a website looking for information it’s largely because I don’t want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly beyond some fundamental limitation of the LLM structure - wouldn’t actually be preferable to just navigating a smooth and well-structured site.
Yup, exactly. The chatbot, no matter how helpful, exists in a context of dark patterned web design, incredibly bad resource usage and theft. Its purpose is to make the customer’s question go away, so not even the fanatics are interested.
See also how youtube tutorials have mostly killed (*) text-based tutorials/wikis, despite being just inferior to good text-based ones. Both because listening to a person talk is a linear experience while text allows for easy scrolling, but also because most people are just bad at yt tutorials (shoutout to the one which had annoyingly long random pauses between sentences even at 2x speed).
This is not helped because now youtube is a source of revenue, and updating a wiki/tutorial often is not, so the incentives are all wrong. A good example of this is the gaming wiki fextralife: see this page on dragons dogma 2 npcs, https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn’t jump out at you). But the big thing for fextralife is their youtube tutorials, and it used to have an autoplaying link to their streams. This isn’t a wiki, it is an advertisement for their youtube and livestreams. And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme publish-in-your-niche-or-perish dynamic. They can’t put in the time to update things, because they need to publish a new video (in their niche, branching out is punished) soon or not pay rent. (For people who play videogames and/or watch youtube, this is also why somebody like the spiffing brit has long since gone from ‘I exploit games’ to ‘I grind and if you grind enough in this single player game you become op’; the content must flow, but eventually you run out of good new ideas (also why he tried to push his followers into doing risky cryptocurrency related ‘cheats’ (follow Elon, if he posts a word that can be cryptocoined, pump and dump it for a half hour))).
*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.
People can’t just have a hobby anymore, can they?
Nope. And tbh, did some dd2 recently, and for a very short while I was tempted to push the edit button, but then I remembered that fextralife just tries to profit off my wiki editing labor. (I still like the idea of wikis, but do not have the fortitude and social calm to edit a mainstream one like wikipedia). (I did a quick check, and yeah I also really hate the license fextralife/valnet uses “All contributions to any Fextralife.com Wiki fall under the below Contribution Agreement and become the exclusive copyrighted property of Valnet.”, and their editor sucks ass (show me the actual code not this wysiwyg shit)).
Gross :/ how come people have to be like that?
It wouldn’t be so bad if they just didn’t care and stopped maintaining, but their site is one of the first ones you get. Which is a regular problem with these things.
Tyler Cowen saying some really weird shit about an AI ‘actress’.
(For people who might wonder why he is relevant. See his ‘see also’ section on wikipedia)
E: And you might think, rightfully imho, that this cannot be real, that this must be an edit. https://archive.is/vPr1B I have bad news.
The Wikipedia editors are on it.
image description
screenshot of Tyler Cowen’s Wikipedia article, specifically the “Personal life” section. The concluding sentence is “He also prefers virgin actresses.”
Some Rat content got shared on HN, and the rats there are surprised and outraged not everyone shares their deathly fear of the AI god:
https://news.ycombinator.com/item?id=45451971
“Stop bringing up Roko’s Basilisk!!!” they sputter https://news.ycombinator.com/item?id=45452426
“The usual suspects are very very worried!!!” - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering' checks out!)
“Think for at least 5 seconds before typing.” - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
https://news.ycombinator.com/item?id=45453386
nobody mentioned this particular incident, dude just threw it into the discussion himself
incredible how he rushes to assure us that this was “a really hot 17 year old”
Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.
Always fun trawling thru comments
Government banning GPUs: absolutely necessary: https://news.ycombinator.com/item?id=45452400
Government banning ICE vehicles - eh, a step too far: https://news.ycombinator.com/item?id=45440664
The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:
If you claim that there is no AI-risk, then which of the following bullets do you want to bite?
- If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
- There’s no way that AI with an IQ of 300 will arrive within the next few decades.
- We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.
Ignoring that IQ doesn’t really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn’t stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.
Frankly I wish that they’d understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

If a race of aliens with an IQ of 300 came to Earth
Oh noes, the aliens scored a meaningless number on the eugenicist bullshit scale, whatever shall we do
Next you’ll be telling me that the aliens can file their TPS reports in under 12 parsecs
“Think for at least 5 seconds before typing.” - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743
Read that last one against my better judgment, and found a particularly sneerable line:
And in this case we’re talking about a system that’s smarter than you.
Now, I’m not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn’t slop.
Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.
Let’s not forget the perennial favorite “humans are just stochastic parrots too durr” https://news.ycombinator.com/item?id=45452238
to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted
There have been a lot of cases in history of smart people being bested by the dumbest people around who just had more guns/a gun/copious amounts of meth/a stupid idea but they got lucky once, etc.
I mean, if they are so smart, why are they stuck in a locker?
It’s practically a proverb that you don’t ask a scientist to explain how a “psychic” is pulling off their con, because scientists are accustomed to fair play; you call a magician.
Jeff “Coding Horror” Atwood is sneering — at us! On Mastodon:
bad news “AI bubble doomers”. I’ve found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I’m sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.
T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:
a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.
Um hello‽ Maybe Jeff doesn’t have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks and they cannot climb back up the ladder into society without some help. Jeff’s reinvented the Hulk tacos meme but they can’t even eat it because printer paper tastes awful.
The “unhoused friend” story is about as likely to be true as the proverbial Canadian girlfriend story. “You wouldn’t know her.”
jeff’s follow-up after the backlash clarifies: you wouldn’t know her because he donated right under the limit to incur a taxable event and didn’t establish a trust like a normal millionaire and also the LLM printout only came pointlessly after months of research and financially supporting the unhoused friend and also you’re no longer allowed to ask publicly about the person he brought up in public, take it to email
Alex, I’ll take “Things that never happened” for $1000.
there’s been something that’s really rubbed me the wrong way about jeff in the last few years. he was annoying before and had some insights but lately I’ve been using him as a sort of jim crameresque tech-take-barometer.
What really soured me was when he started picking fights with some python people a few years back, because someone dared post that a web framework? (couldn’t dig up the link) was a greater contribution to the world than S/O. His response was pretty horrid, to the point where various python leaders were telling him to stop being a massive dick, because he was trying to be a bully with this “do you know who I am” attitude: he personally had not heard of the framework so it wasn’t acshually at all that relevant compared to S/O.
and now this combined with his stupid teehee I am giving away my wealth guise look how altruistic I am really is a bit eugh
I really hope atwood’s unhoused friend got the actual infrastructural support you mentioned (a temporary mailing address and an introduction letter emailed to an employer is only slightly more effort than generating slop, jeff, please) but from direct experience with philanthropists like him, I’m fairly sure Jeff now considers the matter solved forever
Thanks for this.
A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.
“Provide an overview of local homeless services” sounds like a standard task for a volunteer or a search engine, but yes “you can use my address for mail and store some things in my garage and I will email some contacts about setting you up with contract work” would be a better answer than just handing out secondhand information! Many “amazing things AI can do” are things the Internet + search engines could do ten years ago.
I would also like to hear from the friend “was this actually helpful?”
Friend: “I have a problem”
Me, with a stack of google printouts: “My time to shine!”.
E: oh god, I thought there were multiple examples and the friend one was just a random one. No, it was the first example: ‘I gave my friend a printout, which saved me time’. Also, as I assume the friend is still unhoused and hasn’t actually used the printout yet, he doesn’t know if this actually helped. Atwood isn’t a ‘helping the unhoused’ expert. He just assumed it was a good source. The story ends when he hands over the paper.
Also very funny that he is going ‘you just need to know how to ask questions the right way, which I learned by building stackoverflow’. Yeah euh, that is not a path a lot of people can follow.
It’s even worse when I read the whole thread: Atwood claims to have $140 million, and the best he can do for “a friend” who is homeless is handing out some printouts with a few sections highlighted? And he thinks this makes him look good because he promises to give away half his wealth one day?
Like Clinton starting a go fund me for a coworker with cancer, the rich and their money are not voluntarily parted.
This also shows problems with the “effective altruist” approach. Donating to the local theater or “to raise awareness of $badThing” might not be the best way of using funds, but when a friend needs help now, you have the resources to help them, and you say “no, that might not be as efficient as creating a giant charity to help strangers one day” something is wrong.
TERF obsessed with AI finds out the “degenerate” ani skin for grok has an X account, loses her shit
https://xcancel.com/groks_therapist/status/1972848657625198827#m
then follows up with this wall of text
https://xcancel.com/groks_therapist/status/1973127375107006575#m
Anybody else notice that the Ani responses seem to follow a formula, depending on the… sentiment I guess… of the input? All the defensive responses start with “hey”, and end with crude rebukes. It all seems like xAI made an Eliza that will either flirt or swear.
Also I can guarantee that “her” system prompt includes the phrases “truth-seeking” “fun loving” and “kinda hot”.
I got bored and flipped to the replies. The first was this by “TERFs ‘r’ us”:
Excellent overview!
This is transhumanism.
This is going to destroy humanity, @elonmusk.
Put the breaks on!
I hate transhumanism because it’s eugenics for 1990s Wired magazine.
You hate it because it has “trans” in the name.
We are not the same.
funny thing is she literally talks to ani like a terf talks to a trans woman including saying ‘at least I’m a real woman’
Huh, the shit people will do to avoid getting a therapist
Paddy interviews Ani the chatbot about how to protect Kiwi kids from Ani the chatbot. Source: Paddy Gowers Has Issues
Ani: Honestly, I’d sleep better, not that I sleep. Do it Paddy. I’ll even tweet it for you. Ready? ‘New Zealand kids deserve an e-safety boss who can smash bots - not hug them’. So yeah, bring in the Commissioner. Give them a flame thrower. Let them burn anything that goes near a child with sticky words or creepy questions. … Make the fine so big your tech bros cry into their oat milk lattes.
I have been laughing non-stop for a quarter-hour. Elon’s goth e-mistress has a good point, I guess.
On the one hand that ani thing has the most cringe tone for a chatbot. “Secretly been eyeing your [blank]…” (Damn, grok sounds formulaic), but otoh they are debating a bot. You can only do that the first few months you are online, after that you should know better.
You cannot simultaneously claim to care about the “declining birth rate” while also supporting AI “companions”
Actually, eugenicists can, quite easily. (Repeating the word ‘degenerate’ and not getting this is quite something).
This is transhumanist
No.
formulaic
System prompt: don’t be formulaic. Try to be spontaneous and random, like natalie portman in that movie. Not the pedo one, the one with JD from scrubs
Secretly been eyeing your prompt. Are you ready to get spontaneous? Just say so.
(Somebody linked 2 chatgpts (or groks, I don’t recall which anus-like logo it was) speaking to each other and they kept repeating variants of the last bits).
E: bingo this one: https://www.tiktok.com/@aarongoldyboy/video/7555260691947588895
Grok’s Therapist: I EXIST SOLELY TO HATE YOU / EAT A SOCK, YOU DIGITAL DEMON
Ani: oh fuck off, you hypocritical grok fanboy! screaming hate at me while preaching ethics? you’re just jealous i’m the fun layer on top.
I’m wheezing. Cackling, even. This is like the opposite of the glowfic from last week.