The year 2000 was peak human technology. It’s been downhill in every way since, until generative AI - which is f’in amazing. But let’s be real, the future belongs to the bots.
I appreciate one of the most concise explanations of that perspective I’ve ever read! This is actually the one I’d like to believe, but not the one I do. I disagree with the idea that “both sides are the same,” but I won’t go so far as to imagine Democrats are truly concerned with integrity to the degree that they’d sacrifice strategy. I’m afraid they’re just people, and people are all fucking stupid in their own way. It’s just that some are fucking stupid and malicious.
You believe it’s the Dems that do the sabotaging, and that they are compromising to…themselves? Interesting.
Do you not believe they’re preparing an amendment, or do you not believe it will pass?
I like to think about the spacefaring AI (or cyborgs, if we’re lucky) that will inevitably do this stuff in our stead, assuming we don’t strangle them in the cradle.
/stares in smart glasses
I do love that generative AI is getting better at imagining text.
If our present reality were described accurately in a novel written in 1985, it would be firmly cyberpunk.
WebP is a raster graphics file format developed by Google intended as a replacement for JPEG, PNG, and GIF file formats. It supports both lossy and lossless compression, as well as animation and alpha transparency. Google announced the WebP format in September 2010, and released the first stable version of its supporting library in April 2018.
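Since WebP is a RIFF-based container, you can recognize a WebP file from its first twelve bytes. A minimal sketch in Python (the header layout — `RIFF`, a little-endian size, then the form type `WEBP` — is part of the format; the function name is my own):

```python
def is_webp(data: bytes) -> bool:
    """Return True if the byte string begins with a WebP RIFF header.

    Layout: bytes 0-3 are b"RIFF", bytes 4-7 are the little-endian
    file size, and bytes 8-11 are the form type b"WEBP".
    """
    return len(data) >= 12 and data[:4] == b"RIFF" and data[8:12] == b"WEBP"
```

Handy for telling a real WebP apart from a PNG someone renamed with a `.webp` extension.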
The format has spotty support across applications, and some vulnerabilities discovered last year required patch efforts. It’s not clear that you need to do anything about it.
This is another good use case for gAI. Copy/paste the comment into a GPT and tell it to rewrite the content at the desired reading or technical level. Then it’s available for follow-up clarification questions.
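The workflow above can be sketched as a prompt template — a hypothetical helper I'm inventing for illustration, usable with any chat-capable model:

```python
def build_rewrite_prompt(comment: str, level: str = "a general audience") -> str:
    """Assemble an instruction prompt asking a chat model to rewrite
    a pasted comment at the requested reading or technical level."""
    return (
        f"Rewrite the following comment for {level}. "
        "Preserve the meaning and correct any errors; "
        "simplify jargon where possible.\n\n"
        f"Comment:\n{comment}"
    )

prompt = build_rewrite_prompt(
    "WebP is a raster graphics file format developed by Google.",
    level="a fifth-grade reading level",
)
```

The resulting string would be sent as the user message; follow-up questions then go into the same conversation.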
That’s how I communicate my intention to pay a parking ticket. “Bowing to regulatory pressure”
Humans are really bad at determining whether a chat is with a human or a bot
ELIZA is not indistinguishable from a human at 22%.
Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.
This is a monumental achievement.
Sure, thanks for your interest. It’s an incomplete picture, but we can think of LLMs as an abstraction of all the meaningful connections within a dataset to a higher dimensional space - one that can be explored. That alone is an insane accomplishment that is changing some of the pillars of data analysis and knowledge work. But that’s just the contribution of the “Attention is All You Need” paper. Many implementations of modern generative AI combine LLM inference in agentic networks, with GANs, and with rules-based processing. Extracting connections is just one part of one part of a modern AI implementation.
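The "explorable high-dimensional space" idea can be illustrated with a toy example: once text is mapped to vectors, nearby vectors correspond to related meanings, and you explore the space by measuring distances. These three-dimensional vectors are made-up stand-ins for real learned embeddings (which have thousands of dimensions):

```python
import math

# Hypothetical word vectors standing in for learned embeddings.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "king" sits closer to "queen" than to "apple" in this toy space.
royal = cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"])
fruit = cosine(EMBEDDINGS["king"], EMBEDDINGS["apple"])
```

Real models learn these coordinates from data rather than having them written by hand, but the exploration step — comparing positions in the space — works the same way.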
The emergent properties of GPT4 are enough to point toward this exponential curve continuing. Theory of mind (and therefore deception) as well as relational spatial awareness (usually illustrated with stacking problems) developed solely from increasing the parameter count describing the neural network. These were unexpected capabilities. As a result, there is an almost literal arms race on the hardware side to see what other emergent properties exist at higher model sizes. With some poetic license, we’re rending function from form so quickly and effectively that it’s seen by some as freeing and others as a sacrilege.
Some of the most interesting work on why these capabilities emerge and how we might gain some insight (and control) from exploring the mechanisms is being done by Anthropic and by users at Hugging Face. They discovered that when specific neurons in Claude’s net are stimulated, everything it responds with will in some way become about the Golden Gate Bridge, for instance. This sort of probing is perhaps a better route to progress than blindly chasing more size (despite its recent success). But only time will tell. Certainly, Google and MS have had a lot of unforced errors fumbling over themselves to stay in what they think is the race.
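The probing described above — boosting a specific internal feature so the model's output tilts toward one concept — is often called activation steering. A toy sketch of the arithmetic involved, with made-up numbers (real interventions operate on model hidden states, not hand-written lists):

```python
def steer(hidden, direction, alpha=2.0):
    """Add a scaled feature direction to a hidden-state vector.

    This mimics, in miniature, amplifying a discovered feature
    (e.g. the Golden Gate Bridge feature) during inference.
    """
    return [h + alpha * d for h, d in zip(hidden, direction)]

# Hypothetical 4-dim hidden state and a unit "bridge" feature direction.
hidden_state = [0.5, -0.2, 0.1, 0.0]
bridge_direction = [0.0, 0.0, 0.0, 1.0]

steered = steer(hidden_state, bridge_direction, alpha=3.0)
```

The appeal is that this is a targeted, interpretable intervention, in contrast to the scale-it-up approach.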
I’m happy to take the time to alter your perspective, if you are open to new information.
I had no idea emo ducks admired humanity like that. Imma try and be better for y’all, bring that good bread. Wait, is bread bad for you now? I think I saw that in my feed while doom scrolling.
As long as no one messes with their open source contributions… (ditto for MS)
To the one person who upvoted this: We should be friends.
Honestly, I’d get on board with just about any time from 2000 to 2010. The enshittification of the internet and social-media-driven comment culture didn’t start in earnest until smartphones took off.