I’ve been using Kagi. It works well. I like it. Costs money, but that’s a positive in my book.
This one I can really get behind.
Not really an issue. If you want to see this content from defederated instances that everyone else finds obnoxious or disruptive, then you can either browse from an instance that doesn’t defederate that content, or spin up your own personal instance to browse from. It’s easy to move to a different instance. Your choice.
I see this complaint a lot but honestly I don’t quite understand what the big deal is. Not everyone is subscribed to the same communities. Personally, I’d love a feature on kbin/lemmy that rolled up duplicate posts on the client, but it’s really not that annoying for me to see a couple dupes in my feed if they’re posted in relevant communities /shrug
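To make the feature idea concrete, here's a toy sketch of the kind of client-side rollup I mean (the data shape and names are hypothetical, not kbin/lemmy APIs): group posts that share an outbound URL, keep one representative, and remember which other communities it appeared in.

```python
from collections import defaultdict

def roll_up(posts):
    """Collapse duplicate posts (same outbound URL) into one entry per URL.

    posts: list of dicts with "url", "title", and "community" keys.
    Returns one representative per URL, annotated with the other
    communities the same link was posted to.
    """
    groups = defaultdict(list)
    for post in posts:
        groups[post["url"]].append(post)

    rolled = []
    for dupes in groups.values():
        primary = dupes[0]  # keep the first-seen post as the representative
        rolled.append({**primary,
                       "also_in": [d["community"] for d in dupes[1:]]})
    return rolled

feed = [
    {"url": "https://example.com/a", "title": "A", "community": "tech@one"},
    {"url": "https://example.com/a", "title": "A", "community": "tech@two"},
    {"url": "https://example.com/b", "title": "B", "community": "news@one"},
]
```

A real client would probably also want fuzzy title matching for link posts that get re-hosted, but URL grouping alone would catch most of the dupes people complain about.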
In my experience, this has always been a problem after a forum grows beyond a certain size. It’s not really a Reddit-exclusive thing. It’s also not related to karma/reputation-tracking, IMO.
Early adopters of a small, somewhat empty community are people who want to grow the community and encourage posting. Discussion is friendly and careful in certain ways because it’s usually just a few commenters interacting with each other, all of whom want the same thing.
Once a community grows big enough to support lurkers and a variety of topics, with multifaceted discussion happening naturally, a familiar effect kicks in. You know how people are disproportionately more likely to review a product or business after a negative experience than a positive one? In a similar way, once there’s enough content to lurk on (rather than posting in spite of a lack of content, out of a duty to help the community grow), lurkers are more likely to come out of the woodwork and join a discussion when they see something they disagree with or feel strongly about.
Honestly, though, it has a few silver linings. I grew up learning a lot from arguments online in various places. Sometimes they are handled well and sometimes they are handled poorly by the participants. Learn from both. It’s great to see two sides of an issue, even a petty one. It can teach you a ton about how to behave well, how to actually persuade someone on a topic, and how to avoid conflict in the first place. It can also teach you about a controversial topic you knew little about, and spark your curiosity to learn more (if only to refute something with citations) and sometimes change your opinion altogether.
The healthy/toxic dichotomy starts in your own mind. You can’t control others, but you can control yourself. So find those little positive nuggets where you can.
Hey, this is excellent. I was looking to do something like this a few months ago. Bought a few ESP devices to mess with, but never got around to it. I might try it out now, though, using your guide. Thank you!
You’ve misunderstood me. None of those things are what that commenter is referring to. It’s not about improving another energy storage technology by using superconductors, it’s about having a room temperature, ambient pressure version of an existing technology that we already use superconductors for.
I think what they’re referring to is the idea that superconductors can trap current effectively indefinitely; more like replacing a battery with a capacitor than enhancing existing battery chemistry.
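For intuition (my own gloss, not the commenter's): in a normal conducting loop, a circulating current decays with the familiar L/R time constant, and superconductivity is the R = 0 limit of that formula:

```latex
I(t) = I_0 \, e^{-(R/L)\,t} \;\xrightarrow{\;R \to 0\;}\; I(t) = I_0
```

With zero resistance the exponent vanishes and the current persists indefinitely, which is why a superconducting loop stores energy more like an ideal lossless capacitor than like a chemical battery.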
Got a source? When I first read about this, people were cautiously optimistic, partly because the head researcher was well-respected.
“our compound shows greatly consistent x-ray diffraction spectrum with the previously reported structure data”
Uhh, doesn’t look like it to me. This paper’s X-ray diffraction spectrum looks pretty noisy compared to the one from the original paper, with some clear additional/different peaks in certain regions. That could potentially affect the result. I was under the impression from the original paper that a subtle compression of the lattice structure was pretty important to the formation of quantum wells for superconductivity, so if the X-ray diff isn’t spot on, I’ll wait for a few more failed replications before calling it busted.
This is a really terrific explanation. The author puts some very technical concepts into accessible terms, but not so far from reality as to cloud the original concepts. Most other attempts I’ve seen at explaining LLMs or any other NN-based pop tech are either waaaay oversimplified, heavily abstracted, or are meant for a technical audience and are dry and opaque. I’m saving this for sure. Great read.
Fair enough!
I’m not saying this to be an asshole, because I’m happy that you got to the right conclusion eventually, but I have to clarify for history’s sake: if you thought Trump was playing 4D chess in 2015-2016 then you were being duped. Most of us understood what he was from the get-go. Claims of 4D chess have always been stupid.
Again, I’m happy that you figured it out. Everyone makes mistakes. But “we” didn’t think he was playing 4D chess. The hypothesis about Musk/Twitter above is hardly the same.
I honestly only made it a few minutes in, and there is probably plenty of merit to the rest of her perspective. But… I just couldn’t get past the “AI doesn’t exist” part. I get that she doesn’t know or care about the difference and associates the term “AI” with sci-fi-like artificial sentience/AGI, but “AI” has been used for decades to refer to things that mimic intelligence, not just full-on artificial general intelligence. The algorithms governing NPC behavior and pathfinding in video games are AI, and that’s a perfectly accurate description. SmarterChild was AI… even ELIZA was AI. Things like GAN models and LLMs are certainly AI. The goalposts for “intelligence” have moved farther and farther back with every innovation. The AI we have now was fantasy just 20 years ago. Even just five years ago, to most people.
That’s not really how LLMs work. You’re basically describing Markov chains. The statement “It’s just a statistical prediction model with billions of parameters” also applies to the human brain. An LLM is much more of a black box than you’re implying.
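For contrast, here's a toy sketch (my own, in Python) of what a literal Markov chain text model looks like: the next token is drawn from an explicit lookup table of observed continuations, with no learned representation in between. An LLM has nothing resembling this table.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain: repeatedly pick a random observed continuation."""
    random.seed(seed)
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        continuations = chain.get(tuple(out[-len(key):]))
        if not continuations:
            break  # dead end: this context was never followed by anything
        out.append(random.choice(continuations))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
```

The whole "model" is just counted n-gram frequencies; it can never produce a transition it hasn't literally seen, which is exactly where the analogy to LLMs breaks down.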
I’m in my early 30s and I learned metric pretty thoroughly as early as elementary school. Grew up in Massachusetts and went to public school, for what it’s worth.
This really is true. Experiencing it now, myself.
Oh, interesting! Thanks for pointing that out. Side note: entries… I hope kbin adopts better language for what to call Reddit-like posts (articles), Twitter-like microblog posts (posts), and comments (entries?). I never would have guessed entries == comments. Maybe this is ActivityPub-specific naming? It reminds me of a past job where we surfaced internal technical names as the names of products and features… it just confused customers.
So is your comment. And mine. What do you think our brains do? Magic?
edit: This may sound inflammatory but I mean no offense