wttr gang
I believe your “They use attention mechanisms to figure out which parts of the text are important” is just a restatement of my “break it into contextual chunks”, no?
Large language models literally do subspace projections on text to break it into contextual chunks, and then memorize the chunks. That’s how they’re defined.
Source: the paper that defined the transformer architecture and formulas for large language models, which alone has been cited over 85,000 times in academic sources: https://arxiv.org/abs/1706.03762
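If it helps, here is a minimal sketch (my own Python/NumPy, not code from the paper) of the scaled dot-product attention formula that paper defines; the Q/K/V matrices are the projections I was referring to, and the sizes and random inputs are just illustrative:

```python
# Sketch of Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V from the
# "Attention Is All You Need" paper. Toy dimensions, not real model weights.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

# toy example: 4 tokens, 8-dimensional projections
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```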
Piped link for the same video: https://piped.kavin.rocks/watch?v=6XAln91Bs6k
You can look into spotdl (Spotify Downloader), a Python package: https://github.com/spotDL/spotify-downloader
It doesn’t download as you listen, but it does something smarter: feed it a Spotify URL and it downloads every track of the playlist/album/etc. by grabbing high-quality audio from YouTube (and it’s good at avoiding the dreaded music-video versions). It also fills in all the track metadata automatically.
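Rough usage sketch, calling the CLI from Python; the playlist URL is a placeholder and the "download" subcommand is from spotdl v4, so check `spotdl --help` for your installed version:

```python
# Install first with: pip install spotdl
import subprocess

playlist_url = "https://open.spotify.com/playlist/YOUR_PLAYLIST_ID"  # placeholder URL
subprocess.run(["spotdl", "download", playlist_url], check=True)     # downloads all tracks
```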
I don’t have a very consistent naming theme. I’ve used various names related to music, science, and art. I have a decommissioned machine named “numbers”, for example.
However, I would like to point out that we have plenty more than 8 celestial bodies of interest in the solar system if you include Eris, Ceres, Pluto, Makemake, the moons of Jupiter, and more. It might not be indefinitely extendable, but it may help in the short term.
Definitely RE for me. I couldn’t sleep after the first time I saw a crimson head. The sharks were terrifying, too.