• 0 Posts
  • 114 Comments
Joined 1 year ago
Cake day: June 8th, 2023



  • Okay so both of those ideas are incorrect.

    As I said, many are literally Markovian, and the main discriminator is beam search, which does not really matter for helping people understand my meaning, nor should it confuse anyone who understands this topic. I will repeat: there are examples that are literally Markovian. In your example, it would be me saying there are rectangular phones, but you step in to say, “but look, those ones are curved! You should call it a shape, not a rectangle.” I’m not really wrong, and your point is a nitpick that makes communication worse.

    In terms of stochastic processes, no, that is incredibly vague, just like calling a phone a “shape” would not be more descriptive or communicate better. So many things follow stochastic processes that are nothing like a Markov chain, whereas LLMs are like Markov chains, either literally being them or being a modified version that uses derived tree representations.
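    To make the comparison concrete, here is a toy word-level Markov chain text generator. It is a minimal sketch, not anyone's actual model: the next word is sampled using only the current word, which is exactly the Markov property being discussed.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: each step depends ONLY on the current word,
    never on anything earlier -- the Markov property."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

    An LLM's sampler is structurally similar (a distribution over next tokens given a context window), just with a learned, far richer conditioning than this single-word lookup.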



  • Tankie was originally a Trotskyist term for people who supported rolling tanks into Hungary in the 50s.

    Of course, the term “authoritarian bootlicker” is a funny one, as its purveyors have a habit of recycling and promulgating the propaganda pushes of the US State Department and opposition to that tendency is often what gets one labelled a tankie. Like when MLK spoke positively of Castro’s revolution or a Vietnam united under Ho Chi Minh rather than targeted for bombing by the US. Though I am being generous: so many people using the term are so politically illiterate that they apply it to basically anything vaguely left that they disagree with.

    I think you’d be calling him a tankie.






  • “AI” is a parlor trick. Very impressive at first, then you realize there isn’t much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an “AI” that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).

    Let’s say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, “what is the relationship between sugar and obesity?” What will LLMs do when you ask this question? Well, they will just attempt to do associations and to construct reasonable-sounding sentences based on their set of research articles. They might even just take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism. But they won’t be able to actually explain the underlying mechanisms and will fall flat on their face when trying to discern nonsense funded by food lobbies from critical research.

    LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry. But they will be unable to actually criticize the poor work or provide a summary of the relationship between sugar and obesity based on any actual understanding that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). They can only copy and mimic.
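    The “reframe a sentence from the corpus” behavior can be caricatured in a few lines. This is a deliberately dumb sketch (bag-of-words overlap, invented example sentences), but it shows the shape of the failure: the “answer” is just the corpus sentence that shares the most words with the question, with zero evaluation of whether that sentence is sound science.

```python
def tokenize(s):
    """Crude bag-of-words: lowercase, strip trailing punctuation."""
    return {w.strip(".,?").lower() for w in s.split()}

def extractive_answer(question, sentences):
    """Return the corpus sentence sharing the most words with the
    question. No reasoning happens; it is pure pattern matching."""
    q = tokenize(question)
    return max(sentences, key=lambda s: len(q & tokenize(s)))

# Hypothetical corpus sentences, made up for illustration.
papers = [
    "Dietary fiber intake is associated with lower body weight.",
    "High sugar consumption is associated with increased obesity risk.",
    "Exercise frequency correlates with metabolic health.",
]
print(extractive_answer(
    "What is the relationship between sugar and obesity?", papers))
```

    A lobby-funded sentence with the right keywords would win this contest just as easily as a rigorous one, which is the point.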





  • It coincides with their switch to more and more “AI” black-box models. Whereas before they used a hand-tuned heuristic model to decide whether you are turning, merging, or continuing on a road, now they use a less correct but automagic model that they still inevitably have to tune a whole lot; but it is “AI”, so it has the approval of the petty lords of management.

    Incorrect entrances and closed roads are another example. They’re just taking satellite and street-level imagery and tossing it at models that spit out things like “door: 99% confidence” and “road: 98% confidence” while neglecting the question, “are you actually allowed/able to use this?”

    PS: under basically every correct answer in this category is a team of poorly-paid “labelers” whose answers turn directly into the data in the map. Your door-that-is-not-an-entrance was marked as an entrance because someone making $8/hr had only 10 seconds to review it before moving on to the next question.
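    The failure mode above can be sketched in a few lines. Everything here is invented for illustration (the field names like `publicly_usable`, the thresholds, the detections): filtering on model confidence alone keeps the confidently-detected loading dock and closed road while dropping the actual entrance, because usability never enters the decision.

```python
def keep_detections(detections, threshold=0.95):
    """Keep anything the model is confident about. Note that
    'can the public actually use this?' never enters the decision."""
    return [d for d in detections if d["confidence"] >= threshold]

# Hypothetical detections from imagery; fields are made up.
imagery = [
    {"label": "door", "confidence": 0.99, "publicly_usable": False},  # loading dock
    {"label": "road", "confidence": 0.98, "publicly_usable": False},  # closed for construction
    {"label": "door", "confidence": 0.60, "publicly_usable": True},   # the real entrance
]

kept = keep_detections(imagery)
# Both unusable detections pass the filter; the real entrance is dropped.
```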


  • > I… agree but isn’t then contradicting your previous point that innovation will come from large companies if they only try to secure monopolies rather than genuinely innovate?

    Nope.

    > I don’t understand from that perspective who is left to innovate if it’s neither research

    Who said there’s no more research?

    > not the large companies… and startups don’t get the funding either.

    Both are, on average, just doing boring work minorly translating research in the hope of becoming more monopolistic, just at different levels of the food chain. The former eats the latter.



  • Having seen and done this transition, I can tell you that companies do very little for innovation compared to university researchers. Companies are exclusively focused on profit; they don’t do the five-to-ten-year moonshot project unless they are already a massive corporation, not a startup, and even then the massive companies want the easiest thing to translate into a product and begin making money. At best they have engineers who make scaling up more practical, and while that is a fun and interesting thing, it is also very straightforward and is something a company has to avoid screwing up, not something to invest in massively to get right.

    I’ve seen several companies that did literally nothing except swap a couple things on their production line and call it a day. The only transition from research to industry was an IP agreement and a few meetings.

    Large companies are not looking for innovation by buying startups, they are usually looking to secure monopolies. Sometimes they want the product and to work it into their own product offerings. This is often a way to vertically integrate more, not innovate. They bring in-house because they see a competitor emerging and want to hedge their bets or because they see a way to take over a market by just doing the same thing. Sometimes it is just a way to hire some employees that seem pretty competent and thereby deprive your competitors. Large companies operate with a monopoly mindset. This is also why Google kills every project that they declare won’t scale into a huge money-maker (they really mean take over a market).

    Small companies are often started with the plan of actually making and selling their product long-term but run headfirst into the fact that their industry is dominated by just 3 companies that will gladly do the one-two punch of threatening to bleed you legally with nonsense lawsuits while offering to buy you up. Or, on the flipside, just copying your work and changing it just enough that they know they could bleed you legally even though they have broken IP law. Usually, they would rather just buy you out at less than you are worth but enough to make the VCs happy.