This seems plausible to me.
They should make it smarter: group users by swipe technique and train a separate model for each group.
It does feel this way… but why?! Shouldn’t it be getting better with more training data? Is other people’s shitty typing data fucking up my experience?
Or maybe my swiping technique is getting lazy? I’m not sure, but it definitely feels worse than a few years ago.
Why would you want to use beta versions of WhatsApp? Genuinely curious.
But also the LAPD is much less likely to respond to crimes in poorer areas. So the numbers they report aren’t all that meaningful.
NVENC has a slow preset:
https://docs.nvidia.com/video-technologies/video-codec-sdk/12.0/ffmpeg-with-nvidia-gpu/index.html#command-line-for-latency-tolerant-high-quality-transcoding
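For reference, a latency-tolerant high-quality NVENC transcode along the lines of that doc looks roughly like this (a sketch, not a tuned command — filenames and bitrates are placeholders, and it obviously needs an NVIDIA GPU plus an ffmpeg built with NVENC support):

```shell
# High-quality, latency-tolerant H.264 NVENC transcode, adapted from the
# NVIDIA ffmpeg guide linked above. input.mp4/output.mp4 are placeholders.
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
  -c:a copy \
  -c:v h264_nvenc -preset p7 -tune hq \
  -b:v 5M -maxrate 10M -bufsize 10M \
  -rc-lookahead 20 -bf 3 -b_ref_mode middle -temporal-aq 1 \
  output.mp4
```

In recent Video Codec SDKs the presets are p1 (fastest) through p7 (slowest/best quality), and the old `slow`/`medium`/`fast` names map onto them, so `-preset p7 -tune hq` is the "slow preset" being referred to.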
As they expand the NVENC options that are exposed on the command line, is it getting closer to CPU-encoding level of quality?