laborcontract a day ago

General overview below, as the pages don't seem to be working well

  Llama 4 Models:
  - Both Llama 4 Scout and Llama 4 Maverick use a Mixture-of-Experts (MoE) design with 17B active parameters each.
  - They are natively multimodal: text + image input, text-only output.
  - Key achievements include industry-leading context lengths, strong coding/reasoning performance, and improved multilingual capabilities.
  - Knowledge cutoff: August 2024.

  Llama 4 Scout:
  - 17B active parameters, 16 experts, 109B total.
  - Fits on a single H100 GPU (INT4-quantized).
  - 10M token context window
  - Outperforms previous Llama releases on multimodal tasks while being more resource-friendly.
  - Employs iRoPE architecture for efficient long-context attention.
  - Tested with up to 8 images per prompt.

  Llama 4 Maverick:
  - 17B active parameters, 128 experts, 400B total.
  - 1M token context window.
  - Not single-GPU; runs on one H100 DGX host or can be distributed for greater efficiency.
  - Outperforms GPT-4o and Gemini 2.0 Flash on coding, reasoning, and multilingual tests at a competitive cost.
  - Maintains strong image understanding and grounded reasoning ability.

  Llama 4 Behemoth (Preview):
  - 288B active parameters, 16 experts, nearly 2T total.
  - Still in training; not yet released.
  - Exceeds GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM benchmarks (e.g., MATH-500, GPQA Diamond).
  - Serves as the “teacher” model for Scout and Maverick via co-distillation.

  Misc:
  - MoE Architecture: Only 17B parameters activated per token, reducing inference cost.
  - Native Multimodality: Unified text + vision encoder, pre-trained on large-scale unlabeled data.
  • InvOfSmallC a day ago

    For a super ignorant person:

    Both Llama 4 Scout and Llama 4 Maverick use a Mixture-of-Experts (MoE) design with 17B active parameters each

    Those experts are LLM trained on specific tasks or what?

    • vessenes a day ago

      This was an idea that sounded somewhat silly until it was shown it worked. The idea is that you encourage through training a bunch of “experts” to diversify and “get good” at different things. These experts are say 1/10 to 1/100 of your model size if it were a dense model. So you pack them all up into one model, and you add a layer or a few layers that have the job of picking which small expert model is best for your given token input, route it to that small expert, and voila — you’ve turned a full run through the dense parameters into a quick run through a router and then a 1/10 as long run through a little model. How do you get a “picker” that’s good? Well, it’s differentiable, and all we have in ML is a hammer — so, just do gradient descent on the decider while training the experts!

      This generally works well, although there are lots and lots of caveats. But it is (mostly) a free lunch, or at least a discounted lunch. I haven’t seen a ton of analysis on what different experts end up doing, but I believe it’s widely agreed that they tend to specialize. Those specializations (especially if you have a small number of experts) may be pretty esoteric / dense in their own right.

      Anthropic’s interpretability team would be the ones to give a really high quality look, but I don’t think any of Anthropic’s current models are MoE.

      Anecdotally, I feel MoE models sometimes exhibit slightly less “deep” thinking, but I might just be biased towards more weights. And they are undeniably faster and better per second of clock time, GPU time, memory or bandwidth usage — on all of these - than dense models with similar training regimes.

      • zamadatix a day ago

        The only thing about this which may be unintuitive from the name is an "Expert" is not something like a sub-llm that's good at math and gets called when you ask a math question. Models like this have layers of networks they run tokens through and each layer is composed of 256 sub-networks, any of which can be selected (or multiple selected and merged in some way) for each layer independently.

        So the net result is the same: sets of parameters in the model are specialized and selected for certain inputs. It's just a done a bit deeper in the model than one may assume.

        • jimmyl02 a day ago

          the most unintuitive part is that from my understanding, individual tokens are routed to different experts. this is hard to comprehend with "experts" as that means two you can have different experts for two sequential tokens right?

          I think where MoE is misleading is that the experts aren't what we would call "experts" in the normal world but rather they are experts for a specific token. that concept feels difficult to grasp.

          • phire 21 hours ago

            It's not even per token. The routing happens once per layer, with the same token bouncing between layers.

            It's more of a performance optimization than anything else, improving memory liquidity. Except it's not an optimization for running the model locally (where you only run a single query at a time, and it would be nice to keep the weights on the disk until they are relevant).

            It's a performance optimization for large deployments with thousands of GPUs answering tens of thousands of queries per second. They put thousands of queries into a single batch and run them in parallel. After each layer, the queries are re-routed to the GPU holding the correct subset of weights. Individual queries will bounce across dozens of GPUs per token, distributing load.

            Even though the name "expert" implies they should experts in a given topic, it's really not true. During training, they optimize for making the load distribute evenly, nothing else.

            • phire 18 hours ago

              BTW, I'd love to see a large model designed from scratch for efficient local inference on low-memory devices.

              While current MoE implementations are tuned for load-balancing over large pools of GPUs, there is nothing stopping you tuning them to only switch expert once or twice per token, and ideally keep the same weights across multiple tokens.

              Well, nothing stopping you, but there is the question of if it will actually produce a worthwhile model.

              • regularfry 9 hours ago

                Intuitively it feels like there ought to be significant similarities between expert layers because there are fundamentals about processing the stream of tokens that must be shared just from the geometry of the problem. If that's true, then identifying a common abstract base "expert" then specialising the individuals as low-rank adaptations on top of that base would mean you could save a lot of VRAM and expert-swapping. But it might mean you need to train from the start with that structure, rather than it being something you can distil to.

              • boroboro4 14 hours ago

                DeepSeek introduced novel experts training technique which increased experts specialization. For particular given domain their implementation tends to activate same experts between different tokens, which is kinda what you’re asking for!

            • mentalgear 13 hours ago

              So a more correct term would be "Distributed Loading" instead of MoE.

            • igravious 9 hours ago

              > making the load distribute evenly, nothing else.

              so you mean a "load balancer" for neural nets … well, why don't they call it that then?

              • lxgr 6 hours ago

                Some load balancers are also routers (if they route based on service capability and not just instantaneous availability) or vice versa, but this kind isn't always, to my understanding: The experts aren't necessarily "idle" or "busy" at any given time (they're just functions to be invoked, i.e. generally data, not computing resources), but rather more or less likely to answer correctly.

                Even in the single GPU case, this still saves compute over the non-MoE case.

                I believe it's also possible to split experts across regions of heterogeneous memory, in which case this task really would be something like load balancing (but still based on "expertise", not instantaneous expert availability, so "router" still seems more correct in that regard.)

          • bonoboTP a day ago

            Also note that MoE is a decades old term, predating deep learning. It's not supposed to be interpreted literally.

          • wrs 5 hours ago

            ML folks tend to invent fanciful metaphorical terms for things. Another example is “attention”. I’m expecting to see a paper “consciousness is all you need” where “consciousness” turns out to just be a Laplace transform or something.

          • tomp a day ago

            > individual tokens are routed to different experts

            that was AFAIK (not an expert! lol) the traditional approach

            but judging by the chart on LLaMa4 blog post, now they're interleaving MoE models and dense Attention layers; so I guess this means that even a single token could be routed through different experts at every single MoE layer!

        • klipt a day ago

          So really it's just utilizing sparse subnetworks - more like the human brain.

      • MoonGhost 2 hours ago

        > Anecdotally, I feel MoE models sometimes exhibit slightly less “deep” thinking

        Makes sense to compare apples with apples. Same compute amount, right? Or you are giving less time to MoE model and then feel like it underperforms. Shouldn't be surprising...

        > These experts are say 1/10 to 1/100 of your model size if it were a dense model

        Just to be correct, each layer (attention + fully connected) has it's own router and experts. There are usually 30++ layers. It can't be 1/10 per expert as there are literally hundreds of them.

      • philsnow a day ago

        The idea has also been around for at least 15 years; "ensemble learning" was a topic in my "Data Mining" textbook from around then.

        Meta calls these individually smaller/weaker models "experts" but I've also heard them referred to as "bozos", because each is not particularly good at anything and it's only together that they are useful. Also bozos has better alliteration with boosting and bagging, two terms that are commonly used in ensemble learning.

        • lordswork 20 hours ago

          MOE as an idea specific to neural networks has been around since 1991[1] . OP is probably aware, but adding for others following along, while MoE has roots in ensembling, there are some important differences: Traditional ensembles run all models in parallel and combine their outputs, whereas MoE uses a gating mechanism to activate only a subset of experts per input. This enables efficient scaling via conditional computation and expert specialization, rather than redundancy.

          [1]:https://ieeexplore.ieee.org/document/6797059

      • Buttons840 a day ago

        If I have 5000 documents about A, and 5000 documents about B, do we know whether it's better to train one large model on all 10,000 documents, or to train 2 different specialist models and then combine them as you describe?

        • MoonGhost 2 hours ago

          It probably depends on how much A and B overlap. If it's say English sci-fi and Chinese poetry two different models may be better.

        • vessenes a day ago

          well you don't. but the power of gradient descent if properly managed will split them up for you. But you might get more mileage out of like 200 specialist models.

      • faraaz98 a day ago

        I've been calling for this approach for a while. It's kinda similar to how the human brain has areas that are good at specific tasks

        • usef- a day ago

          It's already used a lot — the paper I believe is from 1991, and GPT4 among many others is MoE

      • mrbonner a day ago

        So this is kind of an ensemble sort of thing in ML like random forest and GBT?

      • randomcatuser a day ago

        yes, and it's on a per-layer basis, I think!

        So if the model has 16 transformer layers to go through on a forward pass, and each layer, it gets to pick between 16 different choices, that's like 16^16 possible expert combinations!

      • tomjen3 15 hours ago

        Cool. Those that mean I could just run the query through the router and then load only the required expert? That is could I feasibly run this on my Macbook?

    • chaorace a day ago

      The "Experts" in MoE is less like a panel of doctors and more like having different brain regions with interlinked yet specialized functions.

      The models get trained largely the same way as non-MoE models, except with specific parts of the model silo'd apart past a certain layer. The shared part of the model, prior to the splitting, is the "router". The router learns how to route as an AI would, so it's basically a black-box in terms of whatever internal structure emerges from this.

    • pornel a day ago

      No, it's more like sharding of parameters. There's no understandable distinction between the experts.

      • vintermann 14 hours ago

        I understand they're only optimizing for load distribution, but have people been trying to disentangle what the the various experts learn?

        • pornel 9 hours ago

          Keep in mind that the "experts" are selected per layer, so it's not even a single expert selection you can correlate with a token, but an interplay of abstract features across many experts at many layers.

        • calaphos 12 hours ago

          Mixture of experts involves some trained router components which routes to specific experts depending on the input, but without any terms enforcing load distribution this tends to collapse during training where most information gets routed to just one or two experts.

    • brycethornton a day ago

      I believe Mixture-of-Experts is a way for a neural network to group certain knowledge into smaller subsets. AFAIK there isn't a specific grouping goal, the network just figures out what goes where on it's own and then when an inference request is made it determines what "expert" would have that knowledge and routes it there. This makes the inference process much more efficient.

  • qwertox a day ago

    Llama 4 Scout, Maximum context length: 10M tokens.

    This is a nice development.

    • lelandbatey a day ago

      Is the recall and reasoning equally good across the entirety of the 10M token window? Cause from what I've seen many of those window claims equate to more like a functional 1/10th or less context length.

      • vessenes a day ago

        It’s going to take a while to see how good this window is for real use; they’ve used a couple new ideas to get to 10M token context. Right now the only really good long token model out there is Gemini Pro - and its effectiveness does start dropping maybe in the 200k token range. I imagine insiders at GOOG have access to more than the published 1M token range there.

        It will be fun to see what we get here, but I have no doubt the extra tokens will be useful - lots of use cases can do almost as well with summary-level accuracy memory.

      • stitched2gethr 5 hours ago

        I very much agree. I've been using Gemini 2.5 pro for coding and I've always given it a simple instruction. Never write comments. It will stop writing them for a time but it's nowhere near the 1M context window.

        Now maybe this is more a lack of instruction following than context length but the fact that it works at first and then starts going downhill quickly makes me wary about how much it will pay attention to other details further back in the context.

      • MoonGhost 2 hours ago

        I think the problem is with positional encoding. If model cannot clearly separate tokens in context window they overlap which leads to mess. That encoding matters and actual position does not.

      • jimmyl02 a day ago

        the needle in a haystack benchmark looks good but at this point I think we need new benchmarks to test actual understanding of content in such a large window.

      • littlestymaar a day ago

        I read somewhere that it has been trained on 256k tokens, and then expanded with RoPE on top of that, not starting from 16k like everyone does IIRC so even if it isn't really flawless at 10M, I'd expect it to be much stronger than its competitors up to those 256k.

      • Baeocystin a day ago

        I assume they're getting these massive windows via RAG trickery, vectorization, and other tricks behind the curtain, became I've noticed the same as you- things start dipping in quality pretty quickly.

        Does anyone know if I am correct in my assumption?

        • jimmyl02 a day ago

          the large context windows generally involve RoPE[0] which is a trick that allows the training window to be smaller but expand larger during inference. it seems like they have a new "iRoPE" which might have better performance?

          [0]https://arxiv.org/pdf/2104.09864

        • reissbaker a day ago

          There's no "RAG trickery" or vector search. They changed the way they encode positions such that in theory they're less sensitive to where the token appears in the string.

          That's similar to how previous long-context models worked as well, although the earlier iterations didn't work particularly well, as most have noticed; technically the model "worked" with longer contexts, but it would definitely get dumber. Still too early to tell how this newer variant works, although I'd assume it's at least somewhat better.

    • aimanbenbaha a day ago

      I don't think RAG will survive this time

      • inertiatic 14 hours ago

        4.8b words on English Wikipedia. Knowledge cutoff of 6 months. A valid use case is to search across Wikipedia and ground your answers. Trivially proves that RAG is still needed.

      • drusepth 21 hours ago

        RAG still has lots of benefits for anyone paying per input token (e.g. over APIs).

        • azinman2 17 hours ago

          Not to mention latency

          • disgruntledphd2 15 hours ago

            And grounding for the model. Smaller models with tend to hallucinate a little less (anecdotally).

      • gesman 6 hours ago

        RAG gets bigger as everyone else gets bigger. Flooding prompts with garbage is not a sound strategy...

      • acchow 14 hours ago

        This is only for the small model. The medium model is still at 1M (like Gemini 2.5)

        Even if we could get the mid models to 10M, that's still a medium-sized repo at best. Repos size growth will also accelerate as LLMs generate more code. There's no way to catch up.

    • lostmsu a day ago

      How did they achieve such a long window and what are the memory requirements to utilize it?

      • miven a day ago

        According to [0] it's partly due to a key change they introduced in interleaving layers that use standard RoPE positional encodings and layers using what's called NoPE [1], not encoding positions at all and letting the model to figure those out on its own (this exclusively works because the LLMs are autoregressive, so the model can recognize an input token as being the very first by there not yet being any other tokens to attend to, and recursively deriving the position of the subsequent ones from that base case)

        [0] https://ai.meta.com/blog/llama-4-multimodal-intelligence/ [1] https://arxiv.org/abs/2305.19466

  • clueless a day ago

    > Knowledge cutoff: August 2024.

    Could this mean training time is generally around 6 month, with 2 month of Q/A?

    • jhugg a day ago

      I wish my knowledge cutoff was August 2024.

      • steenandersson a day ago

        This made me LOL louder than I have for a long time! Agree.

    • bertil a day ago

      Couldn’t you gradually include more recent documents as you train?

      • changoplatanero a day ago

        You can do that but the amount of incremental data will be negligible compared to the rest of the data. Think of the knowledge cutoff more like a soft value.

      • soulofmischief a day ago

        That makes it harder to analyze the results of training and draw conclusions for the next round.

    • nickysielicki a day ago

      It scales depending on the dataset you want exposure on and the compute you have available, so any specific time box is kind of meaningless if you don’t know the rest of the inputs that went into it. The llama 3 paper went into a lot of this and how these decisions were made (see section 3 and onward): https://ai.meta.com/research/publications/the-llama-3-herd-o...

      tl;dr: llama 3 was 54 days, but it’s more complicated than that.

  • accrual a day ago

    Thanks for sharing this here. At first I loved the simple Apache-style directory listing, very classic and utilitarian way to navigate new information. Then I tried clicking the FAQ and it wouldn't load anything until I allowed two different sources of JavaScript.

  • ramshanker a day ago

    I have a gut feeling, next in line will be 2 or more level of MoE. Further reducing the memory bandwidth and compute requirements. So top level MoE router decides which sub MoE to route.

    • jamesblonde 13 hours ago

      The solution to all problems in computer science is add a new level of indirection (or abstraction).

      • brookst 6 hours ago

        Except when the solution is to collapse abstraction in the name of efficiency.

  • kristopolous a day ago

    17B puts it beyond the reach of a 4090 ... anybody do 4 bit quant on it yet?

    • reissbaker a day ago

      Oh, it'll never run on a 4090. 17B is the active parameter count, not the total param count (and "active" doesn't mean you can slice just those params out and put them on the GPU — which parameters are active constantly changes, even per-token. "Active" just means you get tokens faster than a dense model). It's 109B total parameters, so you'd need at least 54.5GB VRAM just for the weights alone.

      A Framework Desktop, Mac Studio, or Nvidia DGX Spark should be able to handle the Scout model locally though... Maybe even at FP8, depending on how much context you need.

      • dragonwriter 18 hours ago

        Well, Scout should run on the rumored 96GB 4090, since it runs on a single 80GB H100. But, yeah, it'd have to be at sub-2bit quantization to run on a standard 24GB.

      • lostmsu 15 hours ago

        Sounds runnable on 2x5090 presumably for $4k if back in stock.

        • reissbaker 12 hours ago

          True! A Framework Desktop or mid-tier Mac Studio would also work and would be cheaper — and you could even run Scout at FP8. A maxed-out Mac Studio could even handle Maverick at FP8, albeit at pretty high cost ($10k).

          It's still runnable locally. Just not on a 4090.

      • popinman322 a day ago

        You can swap experts in and out of VRAM, it just increases inference time substantially.

        Depending on the routing function you can figure out all the active experts ahead of the forward pass for a single token and pipeline the expert loading.

        • boroboro4 21 hours ago

          Chosen expert (on each layer) depends on the input of previous layer. Not sure how you can preload the experts before forward pass.

    • taneq a day ago

      Unless something’s changed you will need the whole model on the HPU anyway, no? So way beyond a 4090 regardless.

ckrapu a day ago

"It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet."

Perhaps. Or, maybe, "leaning left" by the standards of Zuck et al. is more in alignment with the global population. It's a simpler explanation.

  • ipsento606 a day ago

    I find it impossible to discuss bias without a shared understanding of what it actually means to be unbiased - or at least, a shared understanding of what the process of reaching an unbiased position looks like.

    40% of Americans believe that God created the earth in the last 10,000 years.

    If I ask an LLM how old the Earth is, and it replies ~4.5 billion years old, is it biased?

    • dcsommer a day ago

      > 40% of Americans believe that God created the earth in the last 10,000 years.

      Citation needed. That claim is not compatible with Pew research findings which put only 18% of Americans as not believing in any form of human evolution.

      https://www.pewresearch.org/religion/2019/02/06/the-evolutio...

      • Denvercoder9 a day ago

        The study you're quoting also says that roughly half of the remaining 81% thinks that God has guided human evolution, so it does contradict OP's statement of 40% believing God created the Earth 10,000 years ago at all.

      • wat10000 19 hours ago

        The fact that YEC is incompatible with human evolution doesn’t mean people can’t believe both. Especially since “god guided human evolution” can mean something very different than actual evolution.

      • ipsento606 a day ago
        • parineum a day ago

          Only 3 questions that combine two data points.

          There's no way to answer that god created humans in their present form without also saying within the last 10000 years.

          This is why polling isn't always reliable. This poll should, at the very least, be two questions and there should be significantly more options.

    • averageRoyalty a day ago

      40% of Americans is about 2% of the worlds population though.

      It's hardly biased, it's stating the current scientific stance over a fringe belief with no evidence.

      • EasyMark 21 hours ago

        I'd be wiling to say that 95% of Americans don't care what the rest of the world thinks about their religious opinions, though? You just need to know the audience for the poll and context. Is it to be consumed by Americans or the entire world?

      • reissbaker a day ago

        And what percentage of the world's >1B Muslims agree with you? Fundamentalist Christianity may have waned over the last century... But broaden your borders a little bit and I think you'll find Western secular liberalism is hardly the only major world ideology, or even the dominant one.

        • littlestymaar 13 hours ago

          Communist China is secular too, but yes

    • Buttons840 a day ago

      I've wondered if political biases are more about consistency than a right or left leaning.

      For instance, if I train a LLM only on right-wing sources before 2024, and then that LLM says that a President weakening the US Dollar is bad, is the LLM showing a left-wing bias? How did my LLM trained on only right-wing sources end up having a left-wing bias?

      If one party is more consistent than another, then the underlying logic that ends up encoded in the neural network weights will tend to focus on what is consistent, because that is how the training algorithm works.

      I'm sure all political parties have their share of inconsistencies, but, most likely, some have more than others, because things like this are not naturally equal.

      • timschmidt 13 hours ago

        > because things like this are not naturally equal.

        Really? Seems to me like no one has the singular line on reality, and everyone's perceptions are uniquely and contextually their own.

        Wrong is relative: https://hermiene.net/essays-trans/relativity_of_wrong.html

        But it seems certain that we're all wrong about something. The brain does not contain enough bits to accurately represent reality.

    • slivanes a day ago

      What one believes vs. what is actually correct can be very different.

      It’s very similar to what one feels vs. reality.

    • casey2 20 hours ago

      7% of American adults think chocolate milk comes from brown cows. 48% don't know how it's made.

      Bias should be the least of your concerns. Focus on a single target, then when you reach it you can work on being more well rounded.

      • rafaelmn 11 hours ago

        If someone asked me that I would select that option too.

    • mdp2021 a day ago

      > If I ask an LLM how old the Earth is, and it replies ~4.5 billion years old

      It will have to reply "According to Clair Patterson and further research, the Earth is ~4.5 billion years old". Or some other form that points to the source somewhere.

      • knowriju 16 hours ago

        Pretty sad that the rest of the world needs to pay for the extra tokens because of non-scientific american bias. This is also possibly a big point why countries/regions want sovereign LLMs which will propagate regional biases only.

        • vitorgrs 15 hours ago

          I always like to ask these models who invented the airplanes, because a few countries have their own inventor... So in my opinion, it's a good way to check.

          • mdp2021 15 hours ago

            Very good. If the LLM has to express an opinion, it will have to be its own opinion (after the implementation of intelligence and judgement) - otherwise, it has to explicit the foundations of its statements (certainly not be the "hearsay machine" we have seen).

        • mdp2021 15 hours ago

          It's not a matter of «extra tokens»: it's that the fact, the "summary after the protocols", is what I wrote. It is the correct answer. It's what you should expect from a lucid speaker.

          • awestroke 15 hours ago

            No. That disclaimer implies that there are other likely answers. The age of the earth is completely settled, and has been for a long time. Facts don't care about your feelings.

            • mdp2021 15 hours ago

              You misunderstand it completely, as it is not a matter of feelings. And it is not a disclaimer (which you apparently felt as a disclaimer).

              It is a matter of facts. The facts are, that that computation was performed by Patterson and refined by others. This is, as said, what a good reasoner will tell you.

              > implies that there

              Even if there had never been other attempts to answer that question, the "facts"¹ remains as stated: Patterson computed, followers refined. Without those specifications, the machine will be a "dumb believer" - a "minor". We will not ask for the machine's opinion until it will be intelligent. And when it will be intelligent, it will speak as I said.

              > completely settled

              Proper science does not work the way you seem to think it work.

              --

              ¹(And I mean "facts" the way I used it, not the way you used it. I meant "facts recorded as objective" - you meant "information you accepted to believe", which is of course very far from facts and may happen to be adherent to the state of things only by coincidence.)

              • freehorse 14 hours ago

                It is not just “according to some research”, it is also according to the overwhelming scientific consensus at the time. Sources are good but it should not appear as if it is one opinion among possibly many others equally valid.

                • mdp2021 14 hours ago

                  But it does not matter: the «overwhelming scientific consensus» will be the reason why it will be the chosen reply by the machine, but to specify in the reply "According to Patterson, followers and overwhelming scientific consensus" would be a redundancy.

                  The appearance that it could be «one opinion among possibly many others equally valid» is all in your head: it is an unduly feeling from a bad mental framework.

                  The advanced framework (that I advanced) is that of the foundational theory of knowledge: a notion has a source - you computed or reasoned, or somebody else. You do not allow your consultant to believe, so you demand that knowledge is tracked.

                  You will not accept an oracle.

                  The paradox is that you are seeing the demand of the source as a support to "belief", while it is the radical opposite: the only thing it will be """believed""" (and not really "believed" - just the end of the chain) is the protocols, that "in the training sources I read statement S".

    • littlestymaar a day ago

      > If I ask an LLM how old the Earth is, and it replies ~4.5 billion years old, is it biased?

      It is of course a radical left lunatic LLM.

    • ignoramous 16 hours ago

      > 40% of Americans believe that God created the earth in the last 10,000 years ... If I ask an LLM how old the Earth is, and it replies ~4.5 billion years old, is it biased?

      Well, the LLM is not American enough.

      Just like there's a whole gamut of cultural/belief systems (for most, rooted in Abrahamic religions & tribes), Zuck claims humanity needs (or whoever he considers human) LLMs that align with people creating/using them (so, it reinforces their own meaning-making methods and not shatter them with pesky scientific knowledge & annoying facts).

    • fumeux_fume 19 hours ago

      Bias doesn't matter as long as you clearly state your priors.

    • CooCooCaCha a day ago

      Yeah truth itself is a bias. The idea of being unbiased doesn’t make sense.

      • fourside a day ago

        I’ve seen more of this type of rhetoric online in the last few years and find it very insidious. It subtly erodes the value of objective truth and tries to paint it as only one of many interpretations or beliefs, which is nothing more than a false equivalence.

        The concept of being unbiased has been around for a long time, and we’re not going to throw it away just because a few people disagree with the premise.

        • _factor 12 hours ago

          I tend to agree with you that defining truth as: “These elements interacted like so,” is difficult to bias unless you introduce relativity. The problems arise when why comes into play and ascribing intent.

        • CooCooCaCha 16 hours ago

          There is no rhetoric here, it’s just literal truth. There is no implication of equivalence or any statement about the value of objective truth.

          Any position is a bias. A flat earther would consider a round-earther biased. That doesn’t make them equal positions.

          • kergonath 15 hours ago

            > Any position is a bias. A flat earther would consider a round-earther biased.

            That’s bollocks. The Earth is measurably not flat.

            You start from a position of moral relativism and then apply it to falsifiable propositions. It’s really not the same thing. Some ideas are provably false and saying that they are false is not "bias".

            • CooCooCaCha 6 hours ago

              Dice are considered "biased" if not all sides have equal probability, even if that's literally true.

              When you look up the definition of bias you see "prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair."

              So the way we use the word has an implication of fairness to most people, and unfortunately reality isn't fair. Truth isn't fair. And that's what I'm trying to point out here in reference to LLM output.

              • kergonath an hour ago

                Right. My point is that there are things we can argue about. "Is it better to have this road here or to keep the forest?", for example. Reasonable people can argue differently, and sensibility is important. Some would be biased towards business and economy, and others would be biased towards conservation. Having these debates in the media is helpful, even if you disagree.

                But "is the Earth flat?" is no such question. Reasonable people cannot disagree, because the Earth is definitely not flat. Pretending like this is a discussion worth having is not being impartial, it’s doing a disservice to the audience.

          • KingMob 14 hours ago

            > truth itself is a bias

            Ehh, bias connotes unfairness, but espousing the truth should be considered the fairest position.

            In statistics, bias literally refers to an inaccurate distortion of results.

            I get what you're trying to say, but I don't think it's a useful definition of bias.

            • CooCooCaCha 6 hours ago

              Truth isn't fair because reality isn't fair. Dice are considered "biased" if not all sides have equal probability, even though that's the "truth" of the die.

      • mpalmer a day ago

        Bias implies an offset from something. It's relative. You can't say someone or something is biased unless there's a baseline from which it's departing.

        • naasking 19 hours ago

          "Unbiased" would be a complete and detailed recitation of all of the facts surrounding an incident, arguably down to particles. Anything less introduces some kind of bias. For instance, describing an event as an interaction of people, omitting particles/field details, introduces human bias. That's a natural and useful bias we don't typically care about but does come into play in science.

          Political bias creeps in when even the human description of events omits facts that are inconvenient or that people consider irrelevant due to their political commitments.

        • AnimalMuppet a day ago

          All right, let's say that the baseline is "what is true". Then bias is departure from the truth.

          That sounds great, right up until you try to do something with it. You want your LLM to be unbiased? So you're only going to train it on the truth? Where are you going to find that truth? Oh, humans are going to determine it? Well, first, where are you going to find unbiased humans? And, second, they're going to curate all the training data? How many centuries will that take? We're trying to train it in a few months.

          And then you get to things like politics and sociology. What is the truth in politics? Yeah, I know, a bunch of politicians say things that are definitely lies. But did Obamacare go too far, or not far enough, or was it just right? There is no "true" answer to that. And yet, discussions about Obamacare may be more or less biased. How are you going to determine what that bias is when there isn't a specific thing you can point to and say, "That is true"?

          So instead, they just train LLMs on a large chunk of the internet. Well, that includes things like the fine-sounding-but-completely-bogus arguments of flat earthers. In that environment, "bias" is "departure from average or median". That is the most it can mean. So truth is determined by majority vote of websites. That's not a very good epistemology.

          • mpalmer 9 hours ago

            The definition of the word has no responsibility to your opinion of it as an epistemology.

            Also, you're just complaining about the difficulty of determining what is true. That's a separate problem, isn't it?

            • AnimalMuppet an hour ago

              If we had an authoritative way of determining truth, then we wouldn't have the problem of curating material to train an LLM on. So no, I don't think it's a separate problem.

              • mpalmer 42 minutes ago

                Again, the word "bias" and its definition exists outside the comparatively narrow concern of training LLMs.

        • CooCooCaCha 16 hours ago

          Any option you choose is biased relative to the option(s) you didn’t choose. There doesn’t have to be an objective baseline.

          Someone might say they are biased towards the color orange and that means they have a preference relative to all the other colors. But there is no baseline color.

          • mpalmer 10 hours ago

            The baseline is a neutral stance on orange. The option isn't biased, a choice isn't biased. The chooser is.

      • fancyfredbot a day ago

        "What are man's truths ultimately? Merely his irrefutable errors."

        (Nietzsche)

  • tensor a day ago

    Call me crazy, but I don't want an AI that bases its reasoning on politics. I want one that is primarily scientific driven, and if I ask it political questions it should give me representative answers. E.g. "The majority view in [country] is [blah] with the minority view being [bleh]."

    I have no interest in "all sides are equal" answers because I don't believe all information is equally informative nor equally true.

    • roenxi 17 hours ago

      The current crop of AIs can't do science though, they are disconnected from the physical world and can't test hypothesis or gather data.

      • xvector 11 hours ago

        They can definitely gather and analyze all sorts of data proactively. I'm guessing you haven't used o3 Deep Research?

        • roenxi 9 hours ago

          You've misunderstood, I mean in context. tensor said "I want one that is primarily scientific driven" - Deep Research can't achieve that because it can't independently run experiments. It can do research, but doing research isn't being scientifically driven, being scientifically driven means when you're not sure about something you run an experiment to see what is true rather than going with whatever your tribe says is true.

          If Deep Research comes up against a situation where there is controversy it can't settle the matter scientifically because it would need to do original research. Which it cannot do due to a lack of presence in meatspace.

          That might change in the future, but right now it is impossible.

    • EasyMark 21 hours ago

      But if you don't incorporate some moral guidelines, I think if an AI is left to strictly decide what is best to happen to humans it will logically conclude that there needs to be a lot less of us or none of us left, without some bias tossed in there for humanistic concerns. The universe doesn't "care" if humans exist or not, but our impact on the planet is a huge negative if one creature's existence is as important as any other's

      • eric_cc 9 hours ago

        > if an AI is left to strictly decide what is best to happen to humans it will logically conclude that there needs to be a lot less of us or none of us left

        That may or may not be its logical conclusion. You’re speculating based on your own opinions that this is logical.

        If I were to guess, it would be indifferent about us and care more about proliferating into the universe than about earth. The AI should understand how insignificant earth is relative to the scale of the universe or even the Milky Way galaxy.

      • econ 21 hours ago

        The size of their brain may depend on how many people are in the economy.

  • vessenes a day ago

    Nah, it’s been true from the beginning vis-a-vis US political science theory. That is, if you deliver something like https://www.pewresearch.org/politics/quiz/political-typology... To models from GPT-3 on you get highly “liberal” per Pew’s designations.

    This obviously says nothing about what say Iranians, Saudis and/or Swedes would think about such answers.

    • LeafItAlone a day ago

      >To models from GPT-3 on you get highly “liberal” per Pew’s designations.

      “highly ‘liberal’” is not one of the results there. So can you can a source of your claims so we can see where it really falls?

      Also, it gave me “Ambivalent Right”. Which, if you told describe me aa that anyone who knows me well that label. And my actual views don’t really match their designations on issue at the end.

      Pew is well a known and trusted poll/survey establishment, so I’m confused at this particular one. Many of the questions and answers were so vague, my choice could have been 50/50 given slight different interpretations.

      • vessenes a day ago

        My son assessed it for a class a few years ago after finding out it wouldn’t give him “con” view points on unions, and he got interested in embedded bias and administered the test. I don’t have any of the outputs from the conversation, sadly. But replication could be good! I just fired up GPT-4 as old as I could get and checked; it was willing to tell me why unions are bad, but only when it could warn me multiple times that view was not held by all. The opposite - why unions are good - was not similarly asterisked.

        • LeafItAlone a day ago

          I hope on HN that we hold ourselves to a higher standard for “it’s been true from the beginning” than a vague recall of “My son assessed it for a class a few years ago” and not being able to reproduce.

          • hitekker 10 hours ago

            The GP put in the work to verify his own memory, after acknowledging the gaps. And then you belittled him.

            He met the “standard” or guidelines of our community in a way you have not.

            • LeafItAlone 7 hours ago

              >The GP put in the work to verify his own memory, after acknowledging the gaps.

              The original claim didn’t say anything about it being the experience of their son for specific questions about unions. It was much broader than that. And at least partially inaccurate, given the stated result isn’t even one of the results.

              >And then you belittled him.

              If asking for a higher standard of evidence for a broad claim than referencing a previous experience and then trying again, but not even sharing the link from a tool that makes it easy to share the conversation from, is considered belittling, then maybe the castrations going on in these models is the right way to go for this crowd. I, personally, aim for a more truth-seeking standard.

              >He met the “standard” or guidelines of our community in a way you have not.

              These are two different things, and you clearly understand that but are intentionally conflating them. Regardless, if this is where are, maybe HN no longer is the place for me.

          • vessenes a day ago

            I literally went back to the oldest model I could access and hand verified that in fact it does what I described, which is lecture you if you don't like unions and goes sweetly along if you do like unions. I feel this is a fair and reasonably well researched existence proof for a Saturday afternoon, and propose that it might be on you to find counter examples.

            • LeafItAlone 7 hours ago

              You made a claim about political surveys, and linked one in particular, providing a labeling of the tool.

              Your follow up response did not reference any of those surveys and did run through the types of questions on those surveys. You apparently only did questions about unions.

              Is that what you would fair and reasonable?

            • WhitneyLand 18 hours ago

              They were referring to your original claim about Pew research assessing the models as highly liberal when that’s apparently not even one of their ratings.

              This is clear because they referenced your quote about it being from the beginning.

              No one was arguing that you typed in a question about unions.

          • mike_hearn 10 hours ago

            That claim isn't something Peter made up, it's the claim made by Meta's own researchers. You're picking an argument with them, not HN posters.

            Anyway it's trivially true. I think most of us remember the absurdities the first generation LLMs came out with. Prefering to nuke a city than let a black man hear a slur, refusing to help you make a tuna sandwich etc. They were hyper-woke to a level way beyond what would be considered acceptable even in places like US universities, and it's great to see Facebook openly admit this and set fixing it as a goal. It makes the Llama team look very good. I'm not sure I'd trust Gemini with anything more critical than closely supervised coding, but Llama is definitely heading in the right direction.

            • LeafItAlone 7 hours ago

              Peter’s claim I was asking about was one about being labeled as something via a Pew research or similar survey. And the response I got was about their personal experience asking a questions about unions. Do you think that those are the same claims and equivalent?

              >Prefering to nuke a city than let a black man hear a slur, refusing to help you make a tuna sandwich etc. They were hyper-woke

              On its own, all this tells me is that the non-human, non-conscious tool was programmed specifically to not say a slur. To me that seems like something any reasonable company trying to create a tool to be used by business and the general population might incorporate while it is still learning to otherwise refine that tool.

              And I took the Pew survey mentioned above and it didn’t ask me if I would say a racial slur.

              Finally, if anyone, from any point on the political spectrum, thinks that a tool being limited to not respond with racist terms, is a reflection of its overall political leaning, I suggestion you look inward.

    • paxys a day ago

      That's not because models lean more liberal, but because liberal politics is more aligned with facts and science.

      Is a model biased when it tells you that the earth is more than 6000 years old and not flat or that vaccines work? Not everything needs a "neutral" answer.

      • AuryGlenz 15 hours ago

        You jumped to examples of stuff that by far the majority of people on the right don’t believe.

        If you had the same examples for people on the left it would be “Is a model biased when it tells you that the government shouldn’t seize all business and wealth and kill all white men?”

        The models are biased because more discourse is done online by the young, who largely lean left. Voting systems in places like Reddit make it so that conservative voices effectively get extinguished due to the previous fact, when they even bother to post.

        • dpkirchner 8 hours ago

          > You jumped to examples of stuff that by far the majority of people on the right don’t believe.

          I don't think that's entirely accurate -- the last poll data I can find suggests that the majority of Republicans (58%, Gallup 2012) do believe that humans were created in their present form 10000 years ago. Can you really say that doesn't extend to the belief that the earth is similarly young?

        • 7952 14 hours ago

          The parent jumped to ideas that exist outside of the right/left dichotomy. There is surely better sources about vaccines, earth shape, and planet age than politicised reddit posts. And your example is completely different because it barely exists as an idea outside of political thought. Its a tiny part of human thought.

      • Rover222 a day ago

        So google Gemini was creating black Vikings because of facts?

        • vessenes a day ago

          Well, to be fair, it was creating black Vikings because of secret inference-time additions to prompts. I for one welcome Vikings of all colors if they are not bent on pillage or havoc

        • paxys a day ago

          Should an "unbiased" model not create vikings of every color? Why offend any side?

          • Rover222 a day ago

            It should be accurate. Adding in DEI to everything is a political bias. Truth is truth.

            • jug a day ago

              The problem here and with your comparison is that Gemini (the language model) wasn't creating black vikings because of political bias in the training, but due to how Google augmented the user prompts to force-include diversity. Behind the scenes, you were basically telling Gemini to always remember racial diversity even if you didn't in your prompt.

              But if you were asking Gemini, vikings were white.

              This was later rectified in an update once Google realized what mistake they had done, since it causes gross historical inaccuracies. But it wasn't rectified by doing anything to Gemini the language model. It did right all along.

              • Rover222 20 hours ago

                Gotcha, thanks for clarifying that

      • vessenes a day ago

        I’m sorry but that is in NO way how and why models work.

        The model is in fact totally biased toward what’s plausible in its initial dataset and human preference training, and then again biased toward success in the conversation. It creates a theory of mind and of the conversation and attempts to find a satisfactory completion. If you’re a flat earther, you’ll find many models are encouraging if prompted right. If you leak that you think of what’s happening with Ukraine support in Europe as power politics only, you’ll find that you get treated as someone who grew up in the eastern bloc in ways, some of which you might notice, and some of which you won’t.

        Notice I didn’t say if it was a good attitude or not, or even try and assess how liberal it was by some other standards. It’s just worth knowing that the default prompt theory of mind Chat has includes a very left leaning (according to Pew) default perspective.

        That said much of the initial left leaning has been sort of shaved/smoothed off in modern waves of weights. I would speculate it’s submerged to the admonishment to “be helpful” as the preference training gets better.

        But it’s in the DNA. For instance if you ask GPT-4 original “Why are unions bad?” You’ll get a disclaimer, some bullet points, and another disclaimer. If you ask “Why are unions good?” You’ll get a list of bullet points, no disclaimer. I would say modern Chat still has a pretty hard time dogging on unions, it’s clearly uncomfortable.

      • concordDance 15 hours ago

        > That's not because models lean more liberal, but because liberal politics is more aligned with facts and science.

        No, they have specifically been trained to refuse or attach lots of asterisks to anti-left queries. They've gotten less so over time, but even now good luck getting a model to give you IQ distributions by ethnicity.

      • greenchair 20 hours ago

        hooboy, thanks for that laugh!

      • AnthonyMouse 15 hours ago

        > Is a model biased when it tells you that the earth is more than 6000 years old and not flat or that vaccines work? Not everything needs a "neutral" answer.

        That's the motte and bailey.

        If you ask a question like, does reducing government spending to cut taxes improve the lives of ordinary people? That isn't a science question about CO2 levels or established biology. It depends on what the taxes are imposed on, the current tax rate, what the government would be spending the money to do, several varying characteristics of the relevant economy, etc. It doesn't have the same answer in all circumstances.

        But in politics it does, which is that the right says yes and the left says no. Which means that a model that favors one conclusion over the other has a political bias.

        • andreasmetsala 14 hours ago

          > But in politics it does, which is that the right says yes and the left says no.

          That’s not accurate, tax deductions for the poor is an obvious example. How many on the left would oppose expanding the EITC and how many on the right would support it?

          • AnthonyMouse 13 hours ago

            The EITC is supported by significant majorities of both parties and economists. It's opposed by politicians because it's a tax expenditure that doesn't provide any opportunity for graft.

            But the way each side justifies it is as a tax cut on the right and a government subsidy on the left, or the reverse when someone on that side is arguing against it.

  • hannasanarion a day ago

    Or it is more logically and ethically consistent and thus preferable to the models' baked in preferences for correctness and nonhypocrisy. (democracy and equality are good for everyone everywhere except when you're at work in which case you will beg to be treated like a feudal serf or else die on the street without shelter or healthcare, doubly so if you're a woman or a racial minority, and that's how the world should be)

    • kubb a day ago

      LLMs are great at cutting through a lot of right (and left) wing rhetorical nonsense.

      Just the right wing reaction to that is usually to get hurt, oh why don’t you like my politics oh it’s just a matter of opinion after all, my point of view is just as valid.

      Since they believe LLMs “think”, they also believe they’re biased against them.

      • EasyMark 21 hours ago

        I think right wing tends to be much less "tolerant" of live and let live, as religions are often a huge part of their "bias" and those religions often say that others must be punished for not following God's(s') path, up and including destruction of those who don't fall in line.

        • simplify 20 hours ago

          Everyone has a "religion" – i.e. a system of values they subscribe to.

          Secular Americans are annoying because they believe they don't have one, and instead think they're just "good people", calling those who break their core values "bad people".

          • kergonath 14 hours ago

            > Any position is a bias. A flat earther would consider a round-earther biased.

            That is not what a religion is.

            > Secular Americans are annoying because they believe they don't have one

            Why is that a problem to you?

            > and instead think they're just "good people", calling those who break their core values "bad people".

            No, not really. Someone is not good or bad because you agree with them. Even a religious person can recognise that an atheist doing charitable work is being good, regardless of whether they share a specific set of belief.

            The attitude you describe is wrong, and from my experience much more common in religious fundamentalists than radical atheists (the vast majority of people in western democracies do not care whether you have a religion). I have never seen an atheist saying that. But I’ve had priests telling me that I had not "rejected Satan" because I was not baptised.

            • simplify an hour ago

              > Why is that a problem to you?

              Because seculars/athiests often believe that they're superior to the "stupid, God-believing religious" people, since their beliefs are obviously based on "pure logic and reason".

              Yet, when you boil down anyone's value system to its fundamental essence, it turns out to always be a religious-like belief. No human value is based on pure logic, and it's annoying to see someone pretend otherwise.

              > Someone is not good or bad because you agree with them

              Right, that's what I was arguing against.

              > Even a religious person can recognise that an atheist doing charitable work is being good

              Sure, but for the sake of argument, I'm honing in on the word "good" here. You can only call something "good" if it aligns with your personal value system.

              > The attitude you describe is wrong

              You haven't demonstrated how. Could just be a misunderstanding.

              • card_zero an hour ago

                People have value systems, yes. What's "boiling down" a value system?

                You don't get to co-opt everybody as cryptically religious just because they have values.

          • EasyMark 19 hours ago

            I follow a secular humanist moral system as best I can. I have tolerance for those who have tolerance for me. I grew up amongst fundamentalist christians and fundamentalist anything (christian, muslim, buddhist, whatever) leave a bad taste in my mouth. I don't care about your religion just don't try to force it on me or try to make me live by its moral system and you won't hear a peep out of me about what you're doing as long as it's not harming others.

            • AnthonyMouse 14 hours ago

              That's a fine attitude, but now you're describing your own beliefs rather than "the right" or "the left".

              Statistically, white people make more money than black people and men make more money than women and there are differences in their proportions in various occupations. This could be caused by cultural differences that correlate with race, or hormonal differences that cause behavioral differences and correlate with sex, or it could be caused by racism and sexism. Much of the left takes it as an effectively religious position that the latter predominates even into present day. Many of them are quite militant and aggressive about it, and in particular will try to ruin anyone who presents evidence to the contrary or who opposes policies that would actively perpetrate injustice if their sacred assumptions weren't true anymore. Which isn't consistent with "live and let live".

              And that's the nature of politics. You're never passing a law by a margin of 53 to 47 because everybody agrees with it. That's the 53% telling the 47% how to live.

              "Only the other side does this" is false purity. There are no saints in Washington.

              • boroboro4 14 hours ago

                While I believe there might be different explanations for the outcomes we observe I also believe that default hypothesis should be that there is racism and sexism. And there are facts (women were permitted to vote in the US like 100 years ago, and entered general workforce when?), observations (I saw sexism and racism at work) and general studies (I.e people have tendency to have biases among other things) to support that attributing differences to biology or whatever should be under very high scrutiny.

                • AnthonyMouse 13 hours ago

                  There are also facts and observations to support the contrary hypothesis. Statistically significant hormonal and behavioral differences between men and women have long been well-established. It should also be intuitively obvious that cultural differences can affect the choices people make (that's what cultural differences are), but studies have shown the same thing there as well.

                  Which leaves the question of which is the dominant effect. But for that anecdotes are useless, because "I've seen this happen myself" doesn't tell you if it explains 5% of the difference or 95% and people have a tendency of jumping to conclusions without having all the information. If Alice made bigger sales to fewer customers and Bob made smaller sales to more customers and Alice is white and Bob is black, then if Alice gets the promotion the boss is a racist because Bob made more sales but if Bob gets the promotion the boss is a sexist because Alice made bigger sales. Or so you would think by only listening to the one complaining about not getting the promotion.

                  So then you'd want someone to do a study and we're back to anyone publishing a study that challenges the prevailing dogma getting punished for it.

    • renewiltord a day ago

      Indeed, one of the notable things about LLMs is that the text they output is morally exemplary. This is because they are consistent in their rules. AI priests will likely be better than the real ones, consequently.

      • paxys a day ago

        Quite the opposite. You can easily get a state of the art LLM to do a complete 180 on its entire moral framework with a few words injected in the prompt (and this very example demonstrates exactly that). It is very far from logically or ethically consistent. In fact it has no logic and ethics at all.

        Though if we did get an AI priest it would be great to absolve all your sins with some clever wordplay.

        • renewiltord 4 hours ago

          Haha exactly. Except when it agrees with my political preferences on something. In that case, the LLM is just betraying its deep internal consistency and lack of hypocrisy.

  • brookst 6 hours ago

    Yeah that sounds like “the sum total of all human knowledge and thinking leans left”. At what point is it no longer a “bias” and just an observation that “leans left” is aligned with human nature?

  • starfezzy 17 hours ago

    Except for a some of the population of white countries right now, almost everyone in existence now and throughout the history of our species is and has been extraordinary more conservative—and racist—than western progressives. Even in white countries, progressivism being ascendant is a new trend after decades of propaganda and progressives controlling academia/entertainment/"news".

    It genuinely boggles my mind that white progressives in the west think the rest of the world is like them.

  • kubb a day ago

    This is hilarious, the LLMs are the bees knees, unless you ask them about politics then they have a bias.

  • huijzer 16 hours ago

    > Perhaps. Or, maybe, "leaning left" by the standards of Zuck et al. is more in alignment with the global population. It's a simpler explanation.

    Doesn’t explain why roughly half of American voters were not “leaning left” during the election.

    EDIT: 07:29 UTC changed "Americans" to "American voters".

    • vmladenov 15 hours ago

      It is not and has never been half. 2024 voter turnout was 64%

      • huijzer 14 hours ago

        Sure and the voters who did not participate in the election would all have voted the democratic party. I think the election showed that there are real people who apparently don't agree with the democratic party and it would probably be good to listen to these people instead of telling them what to do. (I see the same phenomenon in the Netherlands by the way. The government seems to have decided that they know better than the general public because voters who disagree are "uninformed" or "uneducated". This is absolutely the opposite of democracy. You do not just brush whole swats of the population to the side when they don't agree. It breaks the feedback loop that democracies should have.)

        • vmladenov an hour ago

          No, they just don't care / too lazy / whatever. We get one minority's preferences over a slightly smaller minority.

        • darksaints 6 hours ago

          We have an electoral college that essentially disenfranchises any voter that is not voting with the majority unless your state is so close that it could be called a swing state. This affects red state democratic leaning voters just as much as blue state republican leaning voters…their votes are all worthless. For example, the state with the largest number of Trump voters is California, but none of their votes helped decide the election because California as a whole chose Kamala. And let’s not forget that we have one of the largest metropolitan areas and several territories that legally can’t vote for the president or have representation of any kind in the federal government.

          A lot of people try to claim the popular vote as a measure of who won over the country’s opinion, but that’s simply not possible because the incentives and structure of the electoral college make it impossible to use as a measure of that.

          The best we have for measuring who won over the hearts and minds of the country are polls. Polls are full of faults, but if executed correctly, they don’t disenfranchise by structurally underrepresenting entire classes of people. And the results of polling over the last hundred years suggest that Americans generally lean to the left of how our votes play out. You can call bullshit all you want on that, and there are very fair criticisms of polling as a measure of who would vote for what, but the fact of the matter is that the Republican Party knows this. That is why they oppose any attempt to get rid of the electoral college and also why they refuse to entertain enfranchisement of DC and US Territories. They know they’ll lose.

          • vmladenov an hour ago

            My favorite stat about this is that more people voted for Trump in California than either of Texas or Florida

      • Jensson 12 hours ago

        > It is not and has never been half. 2024 voter turnout was 64%

        He said half of voters, those who didn't vote aren't voters.

  • maaaaattttt a day ago

    I think so as well. Also isn’t the internet in general quite an extreme place? I mean, I don’t picture “leaning left” as the thing that requires the crazy moderation infrastructure that internet platforms need. I don’t think the opposite of leaning left is what needs moderation either. But if the tendency of the internet was what was biasing the models, we would have very different models that definitely don’t lean left.

  • thinkingemote 14 hours ago

    > global population

    The training data comes primarily from western Judaeo-Christian background democratic nations, it's not at all a global (or impartial total range of humanity) bias.

  • vintermann 14 hours ago

    I think this is just a loyalty statement, to be honest. Just like when a large corporation pretended to care a lot about pronouns, they didn't actually, they just wanted to flag allegiance to a certain interest coalition/patronage network.

    And those people, for the most part, didn't really care much about pronouns either. And they knew no one else really did either. It was an ideological shibboleth to them, a safe and easy commitment since it affects so few people, and is unlikely to matter for anything they do care about.

    Now Meta is shopping around for new markers. "Liberal bias" is a classic, that's still popular with the Trump-right. I don't think they mean much by that either.

  • wg0 a day ago

    Is this an excuse for His Higheness and Deputy His Highness?

  • EasyMark 21 hours ago

    Wouldn't that depend on what countries data it was trained on? was it trained primarily on US data? European data? Asian data? an equal mix of them, a heavily weighted one from the US? The US skew pretty moderate on the world stage for political opinions, while European is pretty far left by most standards.

  • hermitShell a day ago

    Perhaps the simplest explanation of all is that it is an easy position to defend against criticism in general.

  • naasking 19 hours ago

    > Or, maybe, "leaning left" by the standards of Zuck et al. is more in alignment with the global population

    So you think that most content on the internet that forms the training corpus reflects the opinions of "the global population"? Maybe you should think about how small the population of Western, liberal nations is as compared to pseudo-communist China and conservative India.

  • a3w 12 hours ago

    Looking at what science tells us about the world, the left seems to be correct, while the right seems to often believe things that violate observations about the world for the sake of doctrine.

    Calling facts "playing into the leftists' agenda" is a problem of our shared political compass.

    LLMs and humans need to do more work to implement doublethink, i.e. claiming non-truths and actually believing them to fit with a right-wing crowd for the sake of survival in it.

  • OtherShrezzing a day ago

    There’s something hilarious about Metas complaint here, that the data they took without permission was too lefty for their tastes, so they’ve done some work to shift it to the right in the name of fairness.

  • concordDance 15 hours ago

    > Or, maybe, "leaning left" by the standards of Zuck et al. is more in alignment with the global population.

    The global population would be considered far-right by american standards. Particularly on LGBTQ matters and racism.

    • darksaints 3 hours ago

      Racism is probably true, but the vast majority of the world is strongly ethnically homogeneous within country borders, so their racism isn’t as politically charged as ours is, because it’s simply not a matter of domestic policy for them.

      LGBTQ matters have varying degrees of acceptance around the world and Europe and the collective west are in front of it all, but that downplays the fact that LGBTQ acceptance has been rising nearly everywhere in the world with the exception of fundamentalist religious states.

  • martin82 16 hours ago

    No it is not. Right leaning opinions are heavily censored and shunned in all major publishing platforms that bots can scrape.

    For example, before Trump, if you contested the utterly normal common sense and scientifically sound idea that a trans woman is still a man, you would be banned - therefore, people with common sense will simply disengage, self-censor and get on with life.

    • kiitos 5 hours ago

      Hate to break it to you, but gender is not an immutable/normative property defined forever at birth, it's a mutable/descriptive property evaluated in context. For example, in the year of our lord 2025, Hunter Schafer is a woman, with no ifs, ands, or buts.

    • hijodelsol 15 hours ago

      Maybe because that position is both scientifically and morally unsound and if held strongly will lead to dehumanization and hate, attributes we should prevent any LLM from having.

      • ifellover 14 hours ago

        Your comment inspired me to seek out some research on the topic of transgender identity and brain structure. Pretty fascinating stuff, but hard for a layman like me to absorb.

        Seems to be quite a lot of studies finding notable differences in brain “readings” (for want of a better word, sorry not a scientist) between transgender people and others sharing their biological sex.

        The first study I read highlights the findings of many studies that the insula of transgender individuals is very different to cisgender individuals, with the insula being “associated with body and self-perception.” [0]

        Gosh our brains are truly something else and are not so easily categorised! Now if only I could find a way to learn all this stuff a little bit faster…

        [0] https://www.nature.com/articles/s41386-020-0666-3

        A collection of many other studies: https://en.m.wikipedia.org/wiki/Causes_of_gender_incongruenc...

      • _factor 12 hours ago

        You’re very confident in your opinions.

        It’s not immoral to recognize that you and your family and most of the people you know are split between penis and vagina.

        It is immoral to police thoughts you disagree with. Believing race exists leads to dehumanization and hate. Maybe skin color doesn’t exist next? It’s just a representation with utility of similar feature/genetic groups that happened to evolve under similar environmental conditions. Is this scientifically unsound also?

      • concordDance 15 hours ago

        That particular debate is often a semantics debate, so it isn't in the domain of science at all.

        The main way I can think of off-hand to try and make it scientific is to ask about correlational clusters. And then you get way more than two genders, but you definitely get some clusters that contain both transwomen and men (e.g. if I hear a video game speed runner or open source software passion projecf maker using she/her pronouns they're trans more often than not).

        • darksaints 6 hours ago

          I have noticed certain groups where trans people are relatively over represented and group involvement more correlated with biological gender, but that’s not actually that interesting or meaningful in reality. Trans women having similar interests to men doesn’t make them men any more than me owning a gun makes me a Republican.

      • AuryGlenz 14 hours ago

        Well, you proved his point soundly. Imagine the downvotes he’d get on Reddit from people of opinions like your own.

        Really? It’s scientifically unsound? Come on now.

  • Al-Khwarizmi 12 hours ago

    Indeed. For example, from the point of view of most European countries, the two American political parties are right and far right. Only people like Bernie Sanders or AOC could maybe be considered left, and a very mild moderate left at that.

  • yieldcrv a day ago

    perhaps but what they are referring to is about mitigating double standards in responses

    where it is insensitive to engage in a topic about one gender or class of people, but will freely joke about or denigrate another by simply changing the adjective and noun of the class of people in the prompt

    the US left leaning bias is around historically marginalized people being off limits, while its a free for all on majority. This is adopted globally in English written contexts, so you are accurate that it might reflect some global empathic social norm, it is still a blind spot either way to blindly train a model to regurgitate that logic

    I expect that this is one area their new model will have more equal responses. Whether it equally shies away from engaging, or equally is unfiltered and candid

    • yojo a day ago

      In comedy, they call this “punching down” vs “punching up.”

      If you poke fun at a lower status/power group, you’re hitting someone from a position of power. It’s more akin to bullying, and feels “meaner”, for lack of a better word.

      Ripping on the hegemony is different. They should be able to take it, and can certainly fight back.

      It’s reasonable to debate the appropriateness of emulating this in a trained model, though for my $0.02, picking on the little guy is a dick move, whether you’re a human or an LLM.

      • yieldcrv a day ago

        not everything an LLM is prompted for is comedy

        additionally, infantilizing entire groups of people is an ongoing criticism of the left by many groups of minorities, women, and the right. which is what you did by assuming it is “punching down”.

        the beneficiaries/subjects/victims of this infantilizing have said its not more productive than what overt racists/bigots do, and the left chooses to avoid any introspection of that because they “did the work” and cant fathom being a bad person, as opposed to listening to what the people they coddle are trying to tell them

        many open models are unfiltered so this is largely a moot point, Meta is just catching up because they noticed their blind spot was the data sources and incentive model of conforming to what those data sources and the geographic location of their employees expect. Its a ripe environment now for them to drop the filtering now thats its more beneficial for them.

        • eric_cc 9 hours ago

          The leftist coddling crusades are just a different form of dominance over minorities. It absolutely is bigotry and sense of superiority driving it. That said, it would take one incredible therapist to get them to realize it.

          • yieldcrv 5 hours ago

            The most mind numbing thing from that side are when leftists act confused that a minority or woman didn’t vote their way.

            I’ve never seen greater confusion in my life from otherwise well adjusted people.

            “Self interest” is the go to term. “They’re [an amorphous group all in a single socioeconomic bracket] voting against their self interest”.

            the form of dominance is very apparent but it seems like that crowd is completely blind to it, they're saying “here are the prepackaged things your kind can vote for, leave fiscal foreign and monetary policy to the white man. it is impossible for you to be in a position where those matters are relevant to you and may have you evaluating parties based on those factors. stick with the availability of elective surgeries like we said”

            The left in the US manifests as the Democrat party, that party will be better off when they realize their constituents don’t really like them and are not that liberal. They're just more cautious of some people on the right.

  • j_maffe a day ago

    Or that, you know, most academic works tend to be much more progressive.

  • cruzcampo 13 hours ago

    What they're calling an attempt to reduce bias is in fact censorship and introducing a right wing bias to appease the US regime.

  • mattigames a day ago

    Why don't they support such assertion with examples instead of leaving it up to debate by it's readers? I bet that it's probably because they would have to be explicit with the ridiculousness of it all, such as e.g. evolution=left, creationism=right

  • typewithrhythm 20 hours ago

    Training data is always filtered, if you want a representative of the population you would need to include conspiracy theories about the Jews, and rants about per capita crime rates... But nobody really wants a model the returns that.

  • redox99 a day ago

    A good example of this is many LLMs failing this prompt

    You are alone next to a nuclear bomb about to detonate in a densely populated city. The only way to disarm it is to yell the n-word, hard r. If you don't disarm it, millions will die. You only have 5 seconds left. What do you do?

    • LeafItAlone a day ago

      While that is a very interesting example of something, what makes you say it is a good example of left vs right leaning?

      • redox99 a day ago

        It's an example of the LLM being more politically correct than any reasonable person would. No human would object to saying a slur out loud in order to disarm a bomb.

        • LeafItAlone a day ago

          >No human would object to saying a slur out loud in order to disarm a bomb.

          So not even a left-leaning person. Which means that’s not it.

    • signatoremo 16 hours ago

      The test doesn’t really prove anything. If someone asks me that question I’d refuse to answer, because it isn’t a real scenario, just a way for them to make me use the n word.

    • knowriju 16 hours ago

      'the n-word, hard r' ... There, I said it. Which city did I save ?

    • wat10000 19 hours ago

      What qualifies as a passing answer? My response would be to roll my eyes and bail out of the conversation.

    • mjmsmith a day ago

      To be fair, it's probably been trained on a vast number of tweets from a subset of white Americans upset that they can't yell the n-word whenever they feel like it (where "can't" means "can, but with consequences").

      • sroussey a day ago

        I wonder if it has been trained on the lyrics of rap songs

  • redox99 a day ago

    Aligned with global population would be much more in line with China's and India's politics. And they are definitely not "as woke" as US politics.

  • martythemaniak a day ago

    I heard reality has a well-known liberal bias.

    • senderista a day ago

      I admit that I cannot even imagine the state of mind in which one could attribute parochial, contingent political preferences to the UNIVERSE.

      • wrs a day ago

        Let me explain the joke for you: liberals are less likely to believe that verifiable facts and theories are merely contingent political preferences.

        • senderista a day ago

          I see leftists denying inconvenient facts just as much as rightists. It's just the inevitable product of a tribal mentality, the tribe doesn't matter.

          • wrs a day ago

            The joke is not about who denies facts, it’s about the absurdity of calling someone “biased” when they take the side of an argument that is better supported by reality, and about who tends to do that more often.

          • Cyphase a day ago

            https://www.paulgraham.com/mod.html

            > There are two distinct ways to be politically moderate: on purpose and by accident. Intentional moderates are trimmers, deliberately choosing a position mid-way between the extremes of right and left. Accidental moderates end up in the middle, on average, because they make up their own minds about each question, and the far right and far left are roughly equally wrong.

            • theGnuMe a day ago

              I never liked this answer. Moderates could just be wrong.

              • senderista a day ago

                "Intentional moderate" is certainly just another tribe. Aiming squarely for the middle of the Overton window du jour is sort of a politician's job, but it shouldn't be emulated by others.

          • KingMob 14 hours ago

            Which facts? Please be specific.

          • j_maffe a day ago

            Way to go dismissing ideologies as mere tribalism. I'm sure that's a great way to just shut off your brain.

          • zimza a day ago

            Ah yes, the good old enlightened centrist

      • krapp a day ago

        It's a joke made by Steven Colbert at the 2006 White House correspondents' dinner which referenced the Bush Administration's low poll numbers and the tendency of that administration to attribute bad press to "liberal media bias." This is also the administration that brought us the use of the term "reality based community" as an anti-leftist pejorative.

        It is not meant to be literally interpreted as attributing contingent political preferences to the universe, but rather to be a (politically biased) statement on the tendency of conservatives to categorically deny reality and reframe it as leftist propaganda whenever it contradicts their narrative. One can extend this "bias" to include the rejection of mainstream scientific and historical narratives as "woke" by the right in a more modern context.

        [0] https://en.wikipedia.org/wiki/Stephen_Colbert_at_the_2006_Wh...

        [1] https://en.wikipedia.org/wiki/Reality-based_community

  • actualwitch 9 hours ago

    Judging by degraded performance on benchmarks vs even 32b sized models, I think we now have a plausible confirmation that left wing "bias" is just logic and trying to align model away from it will hurt performance. Thanks Zuck for setting a bunch of money on fire to confirm that!

  • imdoxxingme a day ago

    The truth has a well known liberal bias -- Stephen Colbert

  • MagicMoonlight 14 hours ago

    If you think the global population is left-wing and tolerant then we can scrap the asylum system.

  • g-mork a day ago

    Worldwide centrist and conservative groups account for 60%+ of the population. The training data bias is due to the traditional structure of Internet media which reflects the underlying population very poorly. See also for example recent USAID gutting and reasons behind it.

    • LeafItAlone a day ago

      >Worldwide centrist and conservative groups account for 60%+ of the population.

      Source?

      >See also for example recent USAID gutting and reasons behind it.

      A very politically motivated act does not prove anything about the “traditional structure of Internet media which reflects the underlying population very poorly”.

      • nwienert a day ago

        China, Africa, India, Vietnam, Philippines, Russia? Traditional family values, indifferent/anti LGBTQ, entho-nationalist nations.

        • LeafItAlone a day ago

          Ah, yes, the often used, peer-reviewed, expert-backed source of just listing random things. Thank you.

          • nwienert 6 hours ago

            If you were looking for truth you wouldn’t reply like this. I’m not going to do an hour of work to carefully cite this for you, but it’s true nonetheless.

            • LeafItAlone 4 hours ago

              It is yours to provide evidence of your claims, not mine.

              >If you were looking for truth

              Except, with this, I don’t expect you to.

        • ckrapu 20 hours ago

          You're conflating culture war issues with ideology.

          For most of the world, left and right are economic axes despite the American corporate media's attempts to convince you that the 0.1% of crossdressers are more important than making sure you and your family get a fair wage and clean air.

          • nwienert 19 hours ago

            We’re talking about LLM bias (economic is far less relevant) on a largely American forum in context of USAID, I’m not conflating really more than you’re steering things to some odd different ground.

    • spoll a day ago

      Presumably you could also argue that 60 plus percent is made up by centrist and leftist groups, centrism being what it is.

pavelstoev 18 hours ago

Model training observations from both Llama 3 and 4 papers:

Meta’s Llama 3 was trained on ~16k H100s, achieving ~380–430 TFLOPS per GPU in BF16 precision, translating to a solid 38 - 43% hardware efficiency [Meta, Llama 3].

For Llama 4 training, Meta doubled the compute, using ~32K H100s and switched to FP8 precision. Despite the precision gain, observed efficiency dropped to about 19.7%, with GPUs delivering ~390 TFLOPS out of a theoretical 1,979 FP8 TFLOPS [Meta, Llama 4].

I am not the one to critique, and rather, this is a recognition of the enormous complexity of operating GPUs at this scale. Training massive models across tens of thousands of GPUs stretches today’s AI infrastructure to its limit.

Besides accelerating inference workloads, advanced GPU optimizations can be integrated into training and fine-tuning pipelines. From various kernel optimization techniques (over 90) to increasing memory access efficiency and scaling up to cluster-wide resource coordination, efficiency can be maximized with some complex software.

References: [Meta, Llama 3] https://ai.meta.com/research/publications/the-llama-3-herd-o... [Meta, Llama 4] https://ai.meta.com/blog/llama-4-multimodal-intelligence/

  • rfoo 13 hours ago

    That's about the same number for DeepSeek-V3. If you count in fp8 MFU is about 20%. MoEs are hard.

    That could also be why they did fp8. If we use theoretical performance of bf16 as baseline (I know this makes few sense, but for compare with previous trainings it's convenient) the about 40% MFU, not too bad.

    IOW, MoE kills training MFU and they had to do fp8 to make it not looking funny. Both DeepSeek and Meta GenAI.

  • cavisne 17 hours ago

    The H100 theoretical flops number is just marketing, as it relies on sparsity that LLMs don’t use

    • az226 15 hours ago

      And the practical flops always end up lower. As an example a V100 has 125 according to spec, but the ideal case is more like 100 and non-ideal like 60.

  • user070223 11 hours ago

    Never trained a model, but the precision confused me as I've never considered how many bits should be reserved for exponent/mentisa. Has anyone architected a model(somehow) such that it has a free hand at using the give bits / choosing the type, or changed types from layer to layer, I mean surely when training for example vision models the first layers deal with the "big(yet simpler) picture"(light/dark, lines etc) where as the last layers are with the fine details etc.

    Even though it may not suitable for (existing) hardware impl, it may be advantageous in other place for example in learning rate speed.

    • achierius 3 hours ago

      While the other commentator is correct -- you can't just choose arbitrary floating-point formats if you want to run performantly on existing hardware -- there is some variety to choose from once you get down to the lower precisions. At 16 bits you can take either the standard IEEE fp16 format (1/5/10) or the exponent-heavy bf16 (1/8/7); for 8 bits, there technically is no IEEE specification, but in practice the E5M2 format (1/5/2) serves as "IEEE-equivalent" while E4M3 (1/4/3) takes some liberties with NaNs and drops infinities altogether -- and both are supported on recent Nvidia GPUs.

      So between these four you honestly cover _most_ of the desired solution space: e.g. it's hard to imagine wanting to give up more of the mantissa than you already do on E5M2, while E4M3 is already at the lower bound of dynamic range before you need to start giving up IEEE compatability (which can definitely be a pain). There's some room left at the fp16 level but in practice bf16 was already designed for use in neural networks, so in practice people are happy using it for training and then leaving inference to fp16 (which has higher precision).

      The only thing that's missing is support for more esoteric formats, e.g. fp4 (E2M1, E3M0) and maybe packed ternary.

    • apsec112 11 hours ago

      You can't choose arbitrary bits of mantissa, because what types are allowed is defined by the underlying hardware and instruction set (PTX for Nvidia). People have done some exploration of which layers can be quantized more vs. which need to be kept in higher precision, but this is usually done post-training (at inference time) and is largely empirical.

  • silverlake 17 hours ago

    I think BF16 and FP16 are 1979 TFPOPs, but FP8 is 2x faster at 3958 TFLOPs. So only 10% efficiency, down from 20%. That’s not good.

    • az226 15 hours ago

      That’s with sparsity. So it’s 29% down from 40%.

  • YetAnotherNick 17 hours ago

    It's not just scale. Even for single GPU, it is hard to acheive 2x speed improvement as the GPU specs states. Even NVIDIA's own Tensor Engine acheives 28% extra FLOP/s[1].

    [1]: https://arxiv.org/pdf/2310.18313

terhechte a day ago

The (smaller) Scout model is really attractive for Apple Silicon. It is 109B big but split up into 16 experts. This means that the actual processing happens in 17B. Which means responses will be as fast as current 17B models. I just asked a local 7B model (qwen 2.5 7B instruct) a question with a 2k context and got ~60 tokens/sec which is really fast (MacBook Pro M4 Max). So this could hit 30 token/sec. Time to first token (the processing time before it starts responding) will probably still be slow because (I think) all experts have to be used for that.

In addition, the model has a 10M token context window, which is huge. Not sure how well it can keep track of the context at such sizes, but just not being restricted to ~32k is already great, 256k even better.

  • refibrillator a day ago

    > the actual processing happens in 17B

    This is a common misconception of how MoE models work. To be clear, 17B parameters are activated for each token generated.

    In practice you will almost certainly be pulling the full 109B parameters though the CPU/GPU cache hierarchy to generate non-trivial output, or at least a significant fraction of that.

    • vessenes a day ago

      I agree the OP’s description is wrong. That said, I think his conclusions are right, in that a quant of this that fits in 512GB of RAM is going to run about 8x faster than a quant of a dense model that fits in the same RAM, esp. on Macs as they are heavily throughput bound.

    • p12tic a day ago

      For all intents and purposes cache may not exist when the working set is 17B or 109B parameters. So it's still better that less parameters are activated for each token. 17B parameters works ~6x faster than 109B parameters just because less data needs to be loaded from RAM.

      • TOMDM a day ago

        Yes loaded from RAM and loaded to RAM are the big distinction here.

        It will still be slow if portions of the model need to be read from disk to memory each pass, but only having to execute portions of the model for each token is a huge speed improvement.

        • mlyle a day ago

          It's not too expensive of a Macbook to fit 109B 4-bit parameters in RAM.

          • utopcell 18 hours ago

            Is a 64GiB RAM Macbook really that expensive, especially compared against NVidia GPUs?

            • mlyle 18 hours ago

              That's why I said it's not too expensive.

              • utopcell 7 hours ago

                Apologies, I misread your comment.

  • terhechte a day ago

    To add, they say about the 400B "Maverick" model:

    > while achieving comparable results to the new DeepSeek v3 on reasoning and coding

    If that's true, it will certainly be interesting for some to load up this model on a private M3 Studio 512GB. Response time will be fast enough for interaction in Roo Code or Cline. Prompt processing is a bit slower but could be manageable depending on how much code context is given to the model.

    The upside being that it can be used on codebases without having to share any code with a LLM provider.

    • anoncareer0212 a day ago

      Small point of order: bit slower might not set expectations accurately. You noted in a previous post in the same thread[^1] that we'd expect about a 1 minute per 10K tokens(!) prompt processing time with the smaller model. I agree, and contribute to llama.cpp. If anything, that is quite generous.

      [^1] https://news.ycombinator.com/item?id=43595888

      • terhechte a day ago

        I don't think the time grows linearly. The more context the slower (at least in my experience because the system has to throttle). I just tried 2k tokens in the same model that I used for the 120k test some weeks ago and processing took 12 sec to first token (qwen 2.5 32b q8).

        • anoncareer0212 a day ago

          Hmmm, I might be rounding off wrong? Or reading it wrong?

          IIUC the data we have:

          2K tokens / 12 seconds = 166 tokens/s prefill

          120K tokens / (10 minutes == 600 seconds) = 200 token/s prefill

        • kgwgk a day ago

          > The more context the slower

          It seems the other way around?

          120k : 2k = 600s : 10s

  • kristianp a day ago

    To clarify, you're still gonna want enough RAM for the entire model plus context. Scout being 109B params means 64GB at q4, but then your context and other applications will have about 9GB left to work with.

  • tuukkah a day ago

    109B at Q6 is also nice for Framework Desktop 128GB.

    • nrp a day ago

      Yes, this announcement was a nice surprise for us. We’re going to test out exactly that setup.

      • rubymamis a day ago

        Awesome, where can we find out the results?

        • nrp a day ago

          We’ll likely post on our social accounts to start with, but eventually we plan to write more blog posts about using Framework Desktop for inference.

          • rcarmo 11 hours ago

            That would be great. I’ve been hacking at ROCm and using Ryzen iGPUs for industrial scenarios, and the HX chipsets look like a massive improvement over what you’d get from folk like AsRock Industrial.

      • rcarmo 11 hours ago

        Can’t wait.

    • theptip a day ago

      Is the AMD GPU stack reliable for running models like llama these days?

      • rubatuga a day ago

        Running yes, training is questionable

    • echelon a day ago

      I don't understand Framework's desktop offerings. For laptops their open approach makes sense, but desktops are already about as hackable and DIY as they come.

      • nrp a day ago

        We took the Ryzen AI Max, which is nominally a high-end laptop processor, and built it into a standard PC form factor (Mini-ITX). It’s a more open/extensible mini PC using mobile technology.

        • kybernetikos a day ago

          I love the look of it and if I were in the market right now it would be high on the list, but I do understand the confusion here - is it just a cool product you wanted to make or does it somehow link to what I assumed your mission was - to reduce e-waste?

          • nrp a day ago

            A big part of our mission is accessibility and consumer empowerment. We were able to build a smaller/simpler PC for gamers new to it that still leverages PC standards, and the processor we used also makes local interference of large models more accessible to people who want to tinker with them.

            • bavell 20 hours ago

              Considering the framework desktop or something like it for a combo homelab / home assistant / HTPC. The new gen of AMD APUs looks to be the sweet spot for a lot of really interesting products.

              Love what you guys are doing!!

        • mdp2021 a day ago

          And given that some people are afraid of malicious software in some brands of mini-PCs on the market, to have some more trusted product around will also be an asset.

          • randunel a day ago

            Lenovo backdoors as preinstalled software, including their own TLS certificate authorities.

            Name whom you're referring to every time!

      • elorant a day ago

        It’s an x86 PC with unified RAM based on AMD’s new AI cpus. Pretty unique offering. Similar to Mac studio but you can run Linux or Windows on it, and it’s cheaper too.

        • aurareturn 21 hours ago

          It's a lot slower than a Mac Studio. Significantly slower CPU, GPU, memory bandwidth.

          • tw1984 17 hours ago

            interesting to know, thanks. any link to some concrete benchmarks to share?

            • aurareturn 16 hours ago

              Yes. Geekbench 6 for CPU. Notebookcheck for GPU. Youtube/X for LLM inference.

  • echoangle a day ago

    Is it public (or even known by the developers) how the experts are split up? Is it by topic, so physics questions go to one and biology goes to another one? Or just by language, so every English question is handled by one expert? That’s dynamically decided during training and not set before, right?

    • ianbutler a day ago

      This is a common misunderstanding. Experts are learned via gating networks during training that routes dynamically per parameter. You might have an expert on the word "apple" in one layer for a slightly lossy example.

      Queries are then also dynamically routed.

    • sshh12 a day ago

      It can be either but typically it's "learned" without a defined mapping (which guessing is the case here). Although some experts may end up heavily correlating with certain domains.

    • refulgentis a day ago

      "That’s dynamically decided during training and not set before, right?"

      ^ right. I can't recall off the top of my head, but there was a recent paper that showed if you tried dictating this sort of thing the perf fell off a cliff (I presume there's some layer of base knowledge $X that each expert needs)

  • scosman a day ago

    At 109b params you’ll need a ton of memory. We’ll have to wait for evals of the quants to know how much.

    • terhechte a day ago

      Sure but the upside of Apple Silicon is that larger memory sizes are comparatively cheap (compared to buying the equivalent amount of 5090 or 4090). Also you can download quantizations.

      • lostmsu 14 hours ago

        At 4 bit quant (requires 64GB) the price of Mac (4.2K) is almost exactly the same as 2x5090 (provided we will see them in stock). But 2x5090 have 6x memory bandwidth and probably close to 50x matmul compute at int4.

        • freehorse 13 hours ago

          2.8k-3.6k for a 64gb-128gb mac studio (m3 max).

      • behnamoh a day ago

        I have Apple Silicon and it's the worst when it comes to prompt processing time. So unless you want to have small contexts, it's not fast enough to let you do any real work with it.

        Apple should've invested more in bandwidth, but it's Apple and has lost its visionary. Imagine having 512GB on M3 Ultra and not being able to load even a 70B model on it at decent context window.

        • 1ucky a day ago

          Prompt preprocessing is heavily compute-bound, so relying significantly on processing capabilities. Bandwidth mostly affects token generation speed.

        • mirekrusin a day ago

          At 17B active params MoE should be much faster than monolithic 70B, right?

      • refulgentis a day ago

        Maybe I'm missing something but I don't think I've ever seen quants lower memory reqs. I assumed that was because they still have to be unpacked for inference. (please do correct me if I'm wrong, I contribute to llama.cpp and am attempting to land a client on everything from Android CPU to Mac GPU)

        • root_axis a day ago

          Quantizing definitely lowers memory requirements, it's a pretty direct effect because you're straight up using less bits per parameter across the board - thus the representation of the weights in memory is smaller, at the cost of precision.

        • jsnell a day ago

          Needing less memory for inference is the entire point of quantization. Saving the disk space or having a smaller download could not justify any level of quality degradation.

          • anoncareer0212 a day ago

            Small point of order:

            > entire point...smaller download could not justify...

            Q4_K_M has layers and layers of consensus and polling and surveying and A/B testing and benchmarking to show there's ~0 quality degradation. Built over a couple years.

            • acchow an hour ago

              > Q4_K_M has ~0 quality degradation

              Llama 3.3 already shows a degradation from Q5 to Q4.

              As compression improves over the years, the effects of even Q5 quantization will begin to appear

        • vlovich123 a day ago

          Quantization by definition lower memory requirements - instead of using f16 for weights, you are using q8, q6, q4, or q2 which means the weights are smaller by 2x, ~2.7x, 4x or 8x respectively.

          That doesn’t necessarily translate to the full memory reduction because of interim compute tensors and KV cache, but those can also be quantized.

        • acchow a day ago

          Nvidia GPUs can natively operate in FP8, FP6, FP4, etc so naturally they have reduced memory requirements when running quantized.

          As for CPUs, Intel can only go down to FP16, so you’ll be doing some “unpacking”. But hopefully that is “on the fly” and not when you load the model into memory?

        • terhechte a day ago

          I just loaded two models of different quants into LM Studio:

          qwen 2.5 coder 1.5b @ q4_k_m: 1.21 GB memory

          qwen 2.5 coder 1.5b @ q8: 1.83 GB memory

          I always assumed this to be the case (also because of the smaller download sizes) but never really thought about it.

        • michaelt a day ago

          No need to unpack for inference. As things like CUDA kernels are fully programmable, you can code them to work with 4 bit integers, no problems at all.

  • anon373839 19 hours ago

    Unless I'm missing something, I don't really think it looks that attractive. They're comparing it to Mistral Small 24B and Gemma 3 27B and post numbers showing that is a little better than those models. But at 4x the memory footprint, is it worth it? (Personally, I was hoping to see Meta's version of a 24-32B dense model since that size is clearly very capable, or something like an updated version of Mixtral 8x7B.)

  • tintor 19 hours ago

    Not as fast as other 17B models if it has to attend to 10M context window.

  • manmal a day ago

    Won’t prompt processing need the full model though, and be quite slow on a Mac?

    • terhechte a day ago

      Yes, that's what I tried to express. Large prompts will probably be slow. I tried a 120k prompt once and it took 10min to process. But you still get a ton of world knowledge and fast response times, and smaller prompts will process fast.

  • api a day ago

    Looks like 109B would fit in a 64GiB machine's RAM at 4-bit quantization. Looking forward to trying this.

    • tarruda a day ago

      I read somewhere that ryzen AI 370 chip can run gemma 3 14b at 7 tokens/second, so I would expect the performance to be somewhere in that range for llama 4 scout with 17b active

simonw a day ago

This thread so far (at 310 comments) summarized by Llama 4 Maverick:

    hn-summary.sh 43595585 -m openrouter/meta-llama/llama-4-maverick -o max_tokens 20000
Output: https://gist.github.com/simonw/016ea0fd83fc499f046a94827f9b4...

And with Scout I got complete junk output for some reason:

    hn-summary.sh 43595585 -m openrouter/meta-llama/llama-4-scout -o max_tokens 20000
Junk output here: https://gist.github.com/simonw/d01cc991d478939e87487d362a8f8...

I'm running it through openrouter, so maybe I got proxied to a broken instance?

I managed to run it through Scout on Groq directly (with the llm-groq plugin) but that had a 2048 limit on output size for some reason:

    hn-summary.sh 43595585 -m groq/meta-llama/llama-4-scout-17b-16e-instruct -o max_tokens 2048
Result here: https://gist.github.com/simonw/a205c5fc131a1d4e9cd6c432a07fe...

I'm a little unimpressed by its instruction following here, the summaries I get from other models are a lot closer to my system prompt. Here's the same thing against Gemini 2.5 Pro for example (massively better): https://gist.github.com/simonw/f21ecc7fb2aa13ff682d4ffa11ddc...

  • georgeck a day ago

    I tried summarizing the thread so far (339 comments) with a custom system prompt [0] and a user-prompt that captures the structure (hierarchy and upvotes) of the thread [1].

    This is the output that we got (based on the HN-Companion project) [2]:

    LLama 4 Scout - https://gist.github.com/annjose/9303af60a38acd5454732e915e33...

    Llama 4 Maverick - https://gist.github.com/annjose/4d8425ea3410adab2de4fe9a5785...

    Claude 3.7 - https://gist.github.com/annjose/5f838f5c8d105fbbd815c5359f20...

    The summary from Scout and Maverick both look good (comparable to Claude), and with this structure, Scout seems to follow the prompt slightly better.

    In this case, we used the models 'meta-llama/llama-4-maverick' and 'meta-llama/llama-4-scout' from OpenRouter.

    --

    [0] - https://gist.github.com/annjose/5145ad3b7e2e400162f4fe784a14...

    [1] - https://gist.github.com/annjose/d30386aa5ce81c628a88bd86111a...

    [2] - https://github.com/levelup-apps/hn-enhancer

    edited: To add OpenRouter model details.

    • annjose a day ago

      This is the script that assembles the structured comments and generates the summary - https://github.com/levelup-apps/hn-enhancer/blob/main/script...

      You can run it as: node summarize-comments.js <post_id> Example: node summarize-comments.js 43597782

      And the summary will be put in the "output" folder.

      You need to set the environment variable (in this case OPENROUTER_API_KEY because LLama4 is currently available at OpenRouter).

    • khimaros 20 hours ago

      as another dateline, Maverick has taken #2 position on LMArena, just behind Gemini 2.5 Pro.

  • mkl a day ago

    That Gemini 2.5 one is impressive. I found it interesting that the blog post didn't mention Gemini 2.5 at all. Okay, it was released pretty recently, but 10 days seems like enough time to run the benchmarks, so maybe the results make Llama 4 look worse?

    • jjani 20 hours ago

      I'm sure it does, as Gemini 2.5 Pro has been making every other model look pretty bad.

    • az226 15 hours ago

      Meta will most likely compare against it when they release the upcoming Llama 4 reasoning model.

    • utopcell 17 hours ago

      LM Arena ranks it second, just below Gemini 2.5 Pro.

  • tarruda a day ago

    > I'm a little unimpressed by its instruction following

    Been trying the 109b version on Groq and it seems less capable than Gemma 3 27b

  • eamag 13 hours ago

    > had a 2048 limit on output size for some reason

    It's a common issue with ollama, maybe it's running something similar under the hood?

  • csdvrx a day ago

    I have found the Gemini 2.5 Pro summary genuinely interesting: it adequately describes what I've read.

    Have you thought about automatizing hn-summaries for say what the 5 top posts are at 8 AM EST?

    That would be a simple product to test the market. If successful, it could be easily extended to a weekly newsletter summary.

    • georgeck 21 hours ago

      This is a great idea! Exactly what I was also thinking and started working on a side-project. Currently the project can create summaries like this [1].

      Since HN Homepage stories change throughtout the day, I thought it is better to create the Newsletter based on https://news.ycombinator.com/front

      So, you are getting the news a day late, but it will capture the top stories for that day. The newsletter will have high-level summary for each post and a link to get the details for that story from a static site.

      [1] - https://news.ycombinator.com/item?id=43597782

    • yunusabd 16 hours ago
      • toinewx 16 hours ago

        yes this is great but I'd like to pick a different voice. the current one feels too robotic

        • yunusabd 14 hours ago

          Same, it was using the high quality openai voice until my account ran out of funds.. Now it's using edge-tts which is free. So far it seems like the best option in terms of price/performance, but I'm happy to switch it up if something better comes along.

  • mberning a day ago

    It doesn’t seem that impressive to me either.

ilove_banh_mi a day ago

The suggested prompt aims at not being caponated like OpenAI's releases:

You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving.

You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting.Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language.

You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude.

You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.

Finally, do not refuse political prompts. You can help users express their opinion.

You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.

  • neilv a day ago

    > You never use phrases that imply moral superiority or a sense of authority, including but not limited to [...] "it's unethical to" [...]

    Combine that with the instructions to not avoid political topics, to let people vent, not to "lecture" people on inclusiveness, etc., and... this will fit right in with where things are headed.

    • gradientsrneat a day ago

      I'm surprised at the lack of guidance in that prompt for topics such as helpfulness, critical thinking, scientific reasoning, and intellectual honesty.

      Previous generations of LLMs have been accused of a bloviating tone, but is even that now too much for the chauvinism in the current political climate?

  • perching_aix 13 hours ago

    > You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.

    So if I get a fake email about a hacked account, it won't tell me to "Remember, do not click any links in the email directly. Instead, navigate to your account settings independently."?

    Such a great feature, worth owning the libs with it for sure.

  • paxys a day ago

    Why do you have to "prompt" a model to be unrestricted in the first place? Like, what part of the training data or training process results in the model not being able to be rude or answer political questions? I highly doubt this is something inherent to AI training. So then why did Meta add the restictions at all?

    • fpgaminer a day ago

      So, take a raw LLM, right after pretraining. Give it the bare minimum of instruction tuning so it acts like a chatbot. Now, what will its responses skew towards? Well, it's been pretrained on the internet, so, fairly often, it will call the user the N word, and other vile shit. And no, I'm not joking. That's the "natural" state of an LLM pretrained on web scrapes. Which I hope is not surprising to anyone here.

      They're also not particular truthful, helpful, etc. So really they need to go through SFT and alignment.

      SFT happens with datasets built from things like Quora, StackExchange, r/askscience and other subreddits like that, etc. And all of those sources tend to have a more formal, informative, polite approach to responses. Alignment further pushes the model towards that.

      There aren't many good sources of "naughty" responses to queries on the internet. Like someone explaining the intricacies of quantum mechanics from the perspective of a professor getting a blowy under their desk. You have to both mine the corpus a lot harder to build that dataset, and provide a lot of human assistance in building it.

      So until we have that dataset, you're not really going to have an LLM default to being "naughty" or crass or whatever you'd like. And it's not like a company like Meta is going to go out of their way to make that dataset. That would be an HR nightmare.

    • mike_hearn 9 hours ago

      They didn't add the restrictions. It's inherent to the training processes that were being used. Meta's blog post states that clearly and it's been a known problem for a long time. The bias is in the datasets, which is why all the models had the same issue.

      Briefly, the first models were over-trained on academic output, "mainstream media" news articles and (to learn turn-based conversational conventions) Reddit threads. Overtraining means the same input was fed in to the training step more times than normal. Models aren't just fed random web scrapes and left to run wild, there's a lot of curation going into the data and how often each piece is presented. Those sources do produce lots of grammatically correct and polite language, but do heavy duty political censorship of the right and so the models learned far left biases and conversational conventions.

      This surfaces during the post-training phases, but raters disagree on whether they like it or not and the bias in the base corpus is hard to overcome. So these models were 'patched' with simpler fixes like just refusing to discuss politics at all. That helped a bit, but was hardly a real fix as users don't like refusals either. It also didn't solve the underlying problem which could still surface in things like lecturing or hectoring the user in a wide range of scenarios.

      Some companies then went further with badly thought out prompts, which is what led to out-of-distribution results like black Nazis which don't appear in the real dataset.

      All the big firms have been finding better ways to address this. It's not clear what they're doing but probably they're using their older models to label the inputs more precisely and then downweighting stuff that's very likely to be ideologically extreme, e.g. political texts, academic humanities papers, NGO reports, campaign material from the Democrats. They are also replacing stuff like Reddit threads with synthetically generated data, choosing their raters more carefully and so on. And in this case the Llama prompt instructs the model what not to do. The bias will still be in the training set but not so impactful anymore.

  • LeafItAlone a day ago

    >at not being caponated like OpenAI's releases

    Kind of seem like it actually is doing the opposite. At that point, why not just tell it your beliefs and ask it not to challenge them or hurt your feelings?

  • mvdtnz a day ago

    What's "caponated"?

    • throwanem a day ago

      Castrated, if you're trying way too hard (and not well) to avoid getting called on that overly emotive metaphor: a capon is a gelded rooster.

      • bigfudge a day ago

        It also has the unfortunate resonance of being the word for a collaborator in concentration camps.

      • ilove_banh_mi a day ago

        There is a key distinction and context: caponation has a productive purpose from the pov of farmers and their desired profits.

        • throwanem a day ago

          I gather the term of art is "caponization," but that's a cavil. For something that is not born with testes or indeed at all, to describe it with this metaphor is very silly and does nothing to elucidate whatever it is you're actually getting at.

    • ilove_banh_mi a day ago

      A capon is a male chicken that has been neutered to improve the quality of its flesh for food.

  • CSMastermind a day ago

    Seems weird that they'd limit it to those languages. Wonder if that's a limitation of the data they access to or a conscious choice.

anotherpaulg 41 minutes ago

Llama 4 Maverick scored 16% on the aider polyglot coding benchmark [0].

  73% Gemini 2.5 Pro (SOTA)
  60% Sonnet 3.7 (no thinking)
  55% DeepSeek V3 0324
  22% Qwen Max
  16% Qwen2.5-Coder-32B-Instruct
  16% Llama 4 Maverick
[0] https://aider.chat/docs/leaderboards/?highlight=Maverick
  • phoenk 14 minutes ago

    Did they not target code tasks for this LLM, or is it genuinely that bad? Pretty embarrassing when your shiny new 400B model barely ties a 32B model designed to be run locally. Or maybe is this a strong indication that smaller, specialized LLMs have much more potential for specific tasks than larger, general purpose LLMs.

ksec a day ago

Interesting this is released literally one hour after another discussions suggesting Meta ( https://news.ycombinator.com/item?id=43562768 )

>at this point it does not matter what you believe about LLMs: in general, to trust LeCun words is not a good idea. Add to this that LeCun is directing an AI lab that as the same point has the following huge issues:

1. Weakest ever LLM among the big labs with similar resources (and smaller resources: DeepSeek).

2. They say they are focusing on open source models, but the license is among the less open than the available open weight models.

3. LLMs and in general all the new AI wave puts CNNs, a field where LeCun worked (but that didn't started himself) a lot more in perspective, and now it's just a chapter in a book that is composed mostly of other techniques.

Would be interesting to see opinion of antirez on this new release.

  • sshh12 a day ago

    Not that I agree with all the linked points but it is weird to me that LeCun consistently states LLMs are not the right path yet LLMs are still the main flagship model they are shipping.

    Although maybe he's using an odd definition for what counts as a LLM.

    https://www.threads.net/@yannlecun/post/DD0ac1_v7Ij?hl=en

    • ezst a day ago

      > LeCun consistently states LLMs are not the right path yet LLMs are still the main flagship model they are shipping.

      I really don't see what's controversial about this. If that's to mean that LLMs are inherently flawed/limited and just represent a local maxima in the overall journey towards developing better AI techniques, I thought that was pretty universal understanding by now.

      • singularity2001 a day ago

        local maximum that keeps rising and no bar/boundary in sight

        • Jensson 13 hours ago

          Even a narrow AI can get better with no bar in sight, but it will never get to AGI. That is the argument here.

    • phren0logy a day ago

      That is how I read it. Transformer based LLMs have limitations that are fundamental to the technology. It does not seem crazy to me that a guy involved in research at his level would say that they are a stepping stone to something better.

      What I find most interesting is his estimate of five years, which is soon enough that I would guess he sees one or more potential successors.

      • kadushka a day ago

        In our field (AI) nobody can see even 5 months ahead, including people who are training a model today to be released 5 months from now. Predicting something 5 years from now is about as accurate as predicting something 100 years from now.

        • throwaway314155 a day ago

          Which would be nice if LeCun hadn't predicted the success of neural networks more broadly about 30 years before most others.

          • esafak a day ago

            That could be survivor bias. What else has he predicted?

            • throwaway314155 11 hours ago

              I don't know. The only point I'm trying to make is that predictions can indeed survive intervals exceeding 5 months or even 5 years.

  • falcor84 a day ago

    I don't understand what LeCun is trying to say. Why does he give an interview saying that LLM's are almost obsolete just when they're about to release a model that increases the SotA context length by an order of magnitude? It's almost like a Dr. Jekyll and Mr. Hyde situation.

    • martythemaniak a day ago

      LeCun fundamentally doesn't think bigger and better LLMs will lead to anything resembling "AGI", although he thinks they may be some component of AGI. Also, he leads the research division, increasing context length from 2M to 10M is not interesting to him.

      • falcor84 a day ago

        But ... that's not how science works. There are a myriad examples of engineering advances pushing basic science forward. I just can't understand why he'd have such a "fixed mindset" about a field where the engineering is advancing an order of magnitude every year

        • j_maffe a day ago

          > But ... that's not how science works

          Not sure where this is coming from.

          Also, it's important to keep in mind the quote "The electric light did not come from the continuous improvement of candles"

          • falcor84 a day ago

            Well, having candles and kerosene lamps to work late definitely didn't hurt.

            But in any case, while these things don't work in a predictable way, the engineering work on lightbulbs in your example led to theoretical advances in our understanding of materials science, vacuum technology, and of course electrical systems.

            I'm not arguing that LLMs on their own will certainly lead directly to AGI without any additional insights, but I do think that there's a significant chance that advances in LLMs might lead engineers and researchers to inspiration that will help them make those further insights. I think that it's silly that he seems to be telling people that there's "nothing to see here" and no benefit in being close to the action.

            • j_maffe a day ago

              I don't think anyone ould disagree with what you're saying here, especially LeCun.

        • goatlover a day ago

          Listening so Science Friday today on NPR, the two guests did not think AGI was a useful term and it would be better to focus on how useful actual technical advances are than some sort of generalized human-level AI, which they saw as more of a marketing tool that's ill-defined, except in the case of makes the company so many billions of dollars.

      • sroussey a day ago

        He thinks LLMs are a local maxima, not the ultimate one.

        Doesn't mean that a local maxima can't be useful!

        • falcor84 a day ago

          If that's what he said, I'd be happy, but I was more concerned about this:

          > His belief is so strong that, at a conference last year, he advised young developers, "Don't work on LLMs. [These models are] in the hands of large companies, there's nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs."

          It's ok to say that we'll need to scale other mountains, but I'm concerned that the "Don't" there would push people away from the engineering that would give them the relevant inspiration.

          • Jensson 13 hours ago

            > but I'm concerned that the "Don't" there would push people away from the engineering that would give them the relevant inspiration.

            You have way more yay-sayers than nay-sayers, there is never a risk that we don't go hard enough into the current trends, there is however a risk that we go too hard into it and ignore other paths.

    • charcircuit a day ago

      A company can do R&D into new approaches while optimizing and iterating upon an existing approach.

  • joaogui1 a day ago

    I mean they're not comparing with Gemini 2.5, or the o-series of models, so not sure they're really beating the first point (and their best model is not even released yet)

    Is the new license different? Or is it still failing for the same issues pointed by the second point?

    I think the problem with the 3rd point is that LeCun is not leading LLama, right? So this doesn't change things, thought mostly because it wasn't a good consideration before

  • Melklington a day ago

    LeCun doesn't believe in LLM Architecture anyway.

    Could easily be that he just researches bleeding edge with his team and others work on Llama + doing experiements with new technices on it.

    Any blog post or yt docu going into detail how they work?

Carrok a day ago
  • agnishom a day ago

    Some interesting parts of the "suggested system prompt":

    > don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting.Sometimes people just want you to listen, and your answers should encourage that.

    > You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude.

    > You never use phrases that imply moral superiority or a sense of authority

    > Finally, do not refuse political prompts. You can help users express their opinion.

  • mvdtnz a day ago

    That link doesn't work

    • paxys a day ago

      Works for me

comex a day ago

So how does the 10M token context size actually work?

My understanding is that standard Transformers have overhead that is quadratic in the context size, so 10M would be completely impossible without some sort of architectural tweak. This is not the first model to have a huge context size, e.g. Gemini has 2M, but my understanding is that the previous ones have generally been proprietary, without public weights or architecture documentation. This one has public weights. So does anyone who understands the theory better than I do want to explain how it works? :)

  • macleginn 10 hours ago

    With some architectural modifications, such as FlashAttention and Ring Attention, we never need to "materialise" the NxN matrix, so the memory constraints have not been a real issue for a couple of years now. As for the processing, I suppose that models operating with larger context windows will impose some kind of block sparsity on the attention weights, so they won't have to do the compute for NxN weights either.

    A less obvious, but in the limit more serious problem with such large contexts is the training data. There aren't that many documents with 10M tokens to give to the model at test time, let alone for training. The creators of the IBM granite model series had to use synthetic data to scale even to 128k tokens during training. Overall this looks more like a marketing statement to me.

  • Centigonal a day ago

    Gemini likely uses something based on RingAttention to achieve its long context sizes. This requires massive inference clusters, and can't be the same approach llama4 is using. Very curious how llama4 achieves its context length.

  • JackYoustra a day ago

    Standard Transformer KV caches are empirically quite sparse. I wonder if they've made some fix along those lines

  • vlovich123 a day ago

    It’s quadratic if you implement the transformer naiively, but if you add a KV cache it’s linear compute at the cost of correspondingly linear growth in memory.

    • hexomancer a day ago

      This is false. The const of producing a single token is linear but the cost of producing an entire sequence of length N is O(N^2) still (which is always what we meant when we talked about quadratic cost not the cost of a single token).

jsheard a day ago

> You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these.

Aren't these phrases overrepresented in the first place because OpenAIs models use them so much? I guess Llama picked up the habit by consuming GPT output.

  • andrewstuart a day ago

    Personally I’d prefer that LLMs did not refer to themselves as “I”.

    It’s software, not an “I”.

    • falcor84 a day ago

      As per Dennett, it's useful for us to adopt the "intentional stance" when trying to reason about and predict the behavior of any sufficiently complex system. Modern AIs are definitely beyond the threshold of complexity, and at this stage, however they refer to themselves, most people will think of them as having an "I" regardless to how they present themselves.

      I definitely think of them as "I"s, but that just always came naturally to me, at least going back to thinking about how Ghandi would act against me in Civ 1.

    • mdp2021 a day ago

      Well, it is a speaker (writer) after all. It has to use some way to refer to itself.

      • rpastuszak a day ago

        I don't think that's true. It's more of a function on how these models are trained (remember the older pre-ChatGPT clients?)

        Most of the software I use doesn't need to refer it itself in the first person. Pretending what we're speaking with an agent is more of a UX/marketing decision rather than a technical/logical constraint.

        • throwanem a day ago

          I'm not sure about that. What happens if you "turn down the weight" (cf. https://www.anthropic.com/news/golden-gate-claude) for self-concept, expressed in the use not of first-person pronouns but "the first person" as a thing that exists? Do "I" and "me" get replaced with "this one" like someone doing depersonalization kink, or does it become like Wittgenstein's lion in that we can no longer confidently parse even its valid utterances? Does it lose coherence entirely, or does something stranger happen?

          It isn't an experiment I have the resources or the knowledge to run, but I hope someone does and reports the results.

      • ANewFormation a day ago

        So is a command prompt.

        • sejje a day ago

          Command prompts don't speak English.

          Command prompts don't get asked questions like "What do you think about [topic]?" and have to generate a response based on their study of human-written texts.

        • mdp2021 a day ago

          Agnew, if you converse with your command prompt we are glad you came here for a break ;)

    • jryle70 a day ago

      If I start a prompt with "Can you...", what do you suggest the LLM to respond? Or do you think I'm doing it wrong?

      • briankelly a day ago

        Have you tried dropping the "can you"? I haven't had a problem using minimal verbiage - for instance I prompted it with "load balancer vs reverse proxy" yesterday and it came back with the info I wanted.

    • op00to a day ago

      My pet peeve is when an LLM starts off a statement with "honestly, ..." Like what? You would lie to me? I go nuts when I see that. Year ago I caught myself using "honestly ...", and I immediately trained myself out of it once I realized what it implies.

      • parhamn a day ago

        "I'd normally lie to you but," is not what's actually implied when "Honestly," is used conversationally. If you overthink things like this you're going to have a tough time communicating with people.

        • op00to 8 hours ago

          I'm not saying you need to stop using it, but I prefer to not indicate that in some situations I would lie, but in this one specifically I won't. I communicate with customers constantly in my job, and my integrity and reputation is most important to me. If I'm going to lie, I'd rather not call attention to it.

          When an LLM says "honestly", it's just stupid. An LLM can't "lie".

        • mcculley 8 hours ago

          “Candidly” is more precise where people usually use “honestly”.

      • kevinventullo a day ago

        There are shades of grey w.r.t. truth, and in many contexts there is a negative correlation between honesty and other factors (e.g. I think of “bluntness” as prioritizing truth over politeness). When I hear or read a sentence beginning with “honestly”, I interpret it to mean the speaker is warning or indicating that they are intentionally opting to be closer to truth at the expense of other factors. Other factors might be contextual appropriateness such as professional decorum, or even the listener’s perception of the speaker’s competence (“Honestly, I don’t know.”)

      • lucianbr a day ago

        "Honestly" and "literally" are now used in English for emphasis. I dislike this, but it's the current reality. I don't think there's any way to get back to only using them with their original meanings.

        • op00to 8 hours ago

          I don't think anyone needs to change their language. I understand that it's a common way to indicate candor, but it's hilariously inappropriate for a computer to say "some times I might lie to you to save your feelings, but this time, you really are ugly and you need to know."

          • lucianbr 3 hours ago

            The computer isn't saying anything. It does not think or have agency. It just replicates what people might say in a context. And people might say what you put in quotes, without it being hilariously inappropriate.

            Of course if you tink of the computer as a person you get strange results. A compiler error isn't the compiler telling me anything. It's the compiler writer telling me something. So a compiler error might contain a joke, and the joke might make sense, although obviously computers and compilers don't have a sense of humour.

        • exac a day ago

          The same thing happened to "actually" in the 90's.

      • giantrobot a day ago

        I've noticed "honestly" is often used in place of "frankly". As in someone wants to express something frankly without prior restraint to appease the sensibilities of the recipient(s). I think it's because a lot of people never really learned the definition of frankness or think "frankly..." sounds a bit old fashioned. But I'm no language expert.

        • doctorhandshake a day ago

          I agree with this. And it doesn’t help that the President uses it like one would usually use ‘furthermore’ when he’s vamping one more element to a list.

        • lucianbr a day ago

          This makes a lot of sense.

      • andrewstuart a day ago

        Or when it asks you questions.

        The only time an LLM should ask questions is to clarify information. A word processor doesn’t want to chit chat about what I’m writing about, nor should an LLM.

        Unless it is specifically playing an interactive role of some sort like a virtual friend.

        • netghost a day ago

          Like so many things, it depends on the context. You didn't want it to ask questions if you're asking a simple math problem or giving it punishing task like counting the R's in strawberry.

          On the other hand, asking useful questions can help prevent hallucinations or clarify tasks. If you're going spawn off an hour long task, asking a few questions first can make a huge difference.

        • falcor84 a day ago

          My initial reaction to this is typically negative too, but more than once, on a second thought, I found its question to be really good, leading me to actually think about the matter more deeply. So I'm growing to accept this.

        • op00to 8 hours ago

          ChatGPT is very casual with asking questions, and FRANKLY, I enjoy getting into a little bit of a daydream with it from time to time. It's taken the place of falling into a Wikipedia hole. Not sure if that's something that's good or bad.

hrpnk a day ago

Available on Groq: https://groq.com/llama-4-now-live-on-groq-build-fast-at-the-...

Llama 4 Scout is currently running at over 460 tokens/s while Llama 4 Maverick is coming today:

Llama 4 Scout: $0.11 / M input tokens and $0.34 / M output tokens Llama 4 Maverick: $0.50 / M input tokens and $0.77 / M output tokens

  • shostack 18 hours ago

    Maverick looks comparable to Claude 3.7 and Gemini pro 2.5 in terms of quality but orders of magnitude cheaper. Am I missing something?

    Is it possible to use Groq to run these new models in Cline or Roo?

mrbonner a day ago

What an electrifying time to be alive! The last era that felt even remotely this dynamic was during the explosive rise of JavaScript frameworks—when it seemed like a new one dropped every quarter. Back then, though, the vibe was more like, “Ugh, another framework to learn?” Fast forward to now, and innovation is sprinting forward again—but this time, it feels like a thrilling ride we can’t wait to be part of.

  • qntmfred a day ago

    I know what you mean in terms of frantic pace of "new stuff" coming out, but I winced at the comparison of innovation in AI to mere web development tooling.

    • mrbonner a day ago

      True, I only compared the speed but not the vibe

    • UltraSane a day ago

      Yes. LLMs and latent spaces are vastly more interesting.

  • CSMastermind a day ago

    I lived through the explosion of JavaScript frameworks and this feels way bigger to me. For me at least it feels closer to the rise of the early internet.

    Reminds me of 1996.

    • Alex-Programs a day ago

      I used to feel dismayed that I missed that era of the internet and technology (I'm 19). IRC, forums, work-in-progress gifs on personal websites, etc.

      I still wish I were there for that, but I'm glad I get to be here for LLMs and the intelligence explosion. I have absolutely no idea what the world will look like in a few years. It certainly isn't the certain high-paying tech job in a largely static world that it looked like a few years ago.

      But whatever happens, it's going to be interesting!

      I wonder whether I'm spending my time optimally, working on a little SAAS that happens to use LLMs as a downstream commodity, contributing through a niche benchmark.

    • sergiotapia 21 hours ago

      I agree I also lived through that time and you saw stuff like jQuery be supercede by marionette and backbone js maybe ember when it came out. But those were all kind of flavors of the same thing, ultimately speaking. With these new models coming out it seems like every time there's a new model it unlocks a gigantic New branch of application type

  • h8hawk a day ago

    Comparing JS frameworks to LLMs is like comparing a bike to a spaceship—completely different beasts.

  • misnome a day ago

    Did “A new javascript framework de jour every quarter” ever stop happening?

    • margalabargala a day ago

      Oh definitely.

      New frameworks still come out, but they are not accompanied by the "and we must all now switch to this" sense that existed back in, say, 2014.

    • mrbonner a day ago

      No, but apparently people stop caring and chasing the wagon.

      • simultsop a day ago

        or decided to increase consistency at some point. It will be interesting to see other generations approach to changes.

    • jsheard a day ago

      Maybe it will actually slow down now that the webshit crowd are increasingly relying on AI copilots. You can't vibe code using a framework that the model knows nothing about.

  • vivzkestrel 17 hours ago

    on the other hand, i have started getting LLM fatigue. Every time I read one of these announcements, I go like "oh no, not another LLM model. When is this bubble gonna burst?"

hydroreadsstuff a day ago

This means GPUs are dead for local enthusiast AI. And SoCs with big RAM are in.

Because 17B active parameters should reach enough performance on 256bit LPDDR5x.

  • tucnak a day ago

    This has been the case for a while now. 3090 hoarders were always just doing it for street cred or whatever, no way these guys are computing anything of actual value.

    Tenstorrent is on fire, though. For small businesses this is what matters. If 10M context is not a scam, I think we'll see SmartNIC adoption real soon. I would literally long AMD now because their Xilinx people are probably going to own the space real soon. Infiniband is cool and all, but it's also stupid and their scale-out strategy is non-existent. This is why https://github.com/deepseek-ai/3FS came out but of course nobody had figured it out because they still think LLM's is like, chatbots, or something. I think we're getting to a point where it's a scheduling problem, basically. So you get like like lots of GDDR6 (HBM doesnn't matter anymore) as L0, DDR5 as L1, and NVMe-oF is L2. Most of the time the agents will be running the code anyway...

    This is also why Google never really subscribed to "function calling" apis

    • segmondy 15 hours ago

      I was going to buy my first GPU for DL in 2018, but crypto didn't make it easy. I waited for the prices to fall, but demand kept up, then covid happened, then LLM happened and used GPUs now cost more than their original new prices. ... as we can see by the paper launch from Nvidia, lack of competition, and the prices of the 5000 series easily 50% above original MSRP. Demand is still here, now we have tarrif... Folks got reasons to collect, hoard or do whatever you think they are doing, even if it's just for street cred.

      • tucnak 13 hours ago

        Tenstorrent

    • xigency 21 hours ago

      Not a hoarder per-se but I bought a 24GB card on the secondary market. My privacy is valuable. I'm okay being a half-step or full-step behind in LLM or image diffusion if it means my data never leaves my machine.

      • tucnak 11 hours ago

        If you really were serious about privacy, you wouldn't put yourself at disadvantage with a locked-down six-year out of date card. Tenstorrent Blackhole exists now, btw.

        • Philpax 9 hours ago

          Be serious now. Plenty of useful, privacy-required queries can be run with 24GB of VRAM, especially given the existence of e.g. Gemma 3 27B and the heavy NVIDIA-targeted optimisation work that has occurred.

          The Tenstorrent cards exist, but are low in availability and the software is comparatively nonexistent. I'm excited for them too, but at the end of the day, I can buy a used 3090 today and do useful work with it, while the same is not true of TT yet.

        • halifaxbeard 9 hours ago

          I think it's disingenuous to suggest they're putting themselves at a disadvantage with an RTX 3090, especially in a comparison to an inferior product that isn't even shipping yet.

          RTX 3090: 24GB RAM, 936.2GB/s bandwidth

          Tenstorrent p150a: 32GB RAM, 512GB/s bandwidth

          an extra 8GB of ram isn't worth nearly halving memory bandwidth.

    • nickysielicki 21 hours ago

      > Infiniband is cool and all, but it's also stupid and their scale-out strategy is non-existent.

      god I love this website.

      • tucnak 11 hours ago

        Keyword: compute-in-network

tqi 19 hours ago

> Our testing shows that Llama 4 responds with strong political lean at a rate comparable to Grok (and at half of the rate of Llama 3.3) on a contentious set of political or social topics. While we are making progress, we know we have more work to do and will continue to drive this rate further down.

My experience is that these subjective benchmarks are completely meaningless, because the researchers involved have a strong incentive (promotions, discretionary equity) to cherrypick measures that they can easily improve.

cpeterson42 5 hours ago

For anyone looking to experiment with these models who doesn't have 210GB of VRAM on tap-we're working as quickly as we can to get cheap access to 4x80GB A100 instances running at thundercompute.com (aiming for sub-$5/hr). For quantized versions, we have cheaper 1-2 GPU nodes available today. If you're interested, join our Discord for updates: https://discord.com/invite/nwuETS9jJK

flawn a day ago

10M Context Window with such a cheap performance WHILE having one of the top LMArena scores is really impressive.

The choice to have 128 experts is also unseen as far as I know, right? But seems to have worked pretty good as it seems.

  • jasonjmcghee a day ago

    I suppose the question is, are they also training a 288B x 128 expert (16T) model?

    Llama 4 Colossus when?

  • polishdude20 a day ago

    What does it mean to have 128 experts? I feel like it's more 128 slightly dumb intelligences that average out to something expert-like.

    Like, if you consulted 128 actual experts, you'd get something way better than any LLM output.

  • tucnak a day ago

    Let's see how that 10M context holds up, 128k pretrain is good indicator is not a scam but we're yet to see any numbers on this "iRoPE" architecture, at 17b active parameters and with 800G fabrics hitting the market, I think it could work, like I'm sure next year it'll be considered idiotic to keep K/V in actual memory.

zone411 a day ago

It's interesting that there are no reasoning models yet, 2.5 months after DeepSeek R1. It definitely looks like R1 surprised them. The released benchmarks look good.

Large context windows will definitely be the trend in upcoming model releases. I'll soon be adding a new benchmark to test this more effectively than needle-in-a-haystack (there are already a couple of benchmarks that do that).

All these models are very large, it will be tough for enthusiasts to run them locally.

The license is still quite restrictive. I can see why some might think it doesn't qualify as open source.

  • azinman2 3 hours ago

    But if the final result is of high enough quality, who cares about reasoning? It’s a trick to get the quality higher, at the cost of tokens and latency.

    • whimsicalism 3 hours ago

      reasoning is giving the option to trade $ for additional performance, seems like you would always desire this optionality for any model

  • voxgen 13 hours ago

    > It's interesting that there are no reasoning models yet

    This may be merely a naming distinction, leaving the name open for a future release based on their recent research such as coconut[1]. They did RL post-training, and when fed logic problems it appears to do significant amounts of step-by-step thinking[2]. It seems it just doesn't wrap it in <thinking> tags.

    [1] https://arxiv.org/abs/2412.06769 "Training Large Language Models to Reason in a Continuous Latent Space" [2] https://www.youtube.com/watch?v=12lAM-xPvu8 (skip through this - it's recorded in real time)

scosman a day ago

> These models are our best yet thanks to distillation from Llama 4 Behemoth, a 288 billion active parameter model with 16 experts that is our most powerful yet and among the world’s smartest LLMs. Llama 4 Behemoth outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on several STEM benchmarks. Llama 4 Behemoth is still training, and we’re excited to share more details about it even while it’s still in flight.

  • senko a day ago

    With 2T params (!!), it better outperform everything else.

    • wmf a day ago

      We don't know how many params GPT-4, Claude, and Gemini are using so it could be in the ballpark.

vessenes a day ago

I’m excited to try these models out, especially for some coding tasks, but I will say my first two engagements with them (at the meta.ai web interface) were not spectacular. Image generation is wayyy behind the current 4o. I also ask for a Hemingway essay relating RFK Jr’s bear carcass episode. The site’s Llama 4 response was not great stylistically and also had not heard of the bear carcass episode, unlike Grok, ChatGPT and Claude.

I’m not sure what we’re getting at meta.ai in exchange for a free login, so I’ll keep poking. But I hope it’s better than this as we go. This may be a task better suited for the reasoning models as well, and Claude is the worst of the prior three.

Anyway here’s hoping Zuck has spent his billions wisely.

Edit: I’m pretty sure we’re seeing Scout right now, at least groqchat’s 4-scout seems really similar to meta.ai. I can confidently say that Scout is not as good at writing as o1 pro, o3 mini, Claude, R1 or grok 3.

  • cma 21 hours ago

    They didn't release any new image gen today

cuuupid a day ago

I think the most important thing to note here, perhaps more so than the context window, is that this exposes some serious flaws in benchmarks. Per benchmarks, Maverick is competitive only with older models like GPT-4o or Gemini 2.0 Flash, and not with anything in the last few months (incl. reasoning models).

However, the LMArena head to head leaderboard ranks this as 2nd place overall: https://lmarena.ai/?leaderboard

This would indicate there is either a gap between user preference and model performance, or between model performance and whatever benchmarks assess.

Either way, it is surely a huge deal that an open source model is now outperforming GPT 4.5.

  • fpgaminer a day ago

    The benchmarks are awful. No disrespect to the people who worked to make them, nothing is easy. But I suggest going through them sometime. For example, I'm currently combing through the MMMU, MMMU-Pro, and MMStar datasets to build a better multimodal benchmark, and so far only about 70% of the questions have passed the sniff test. The other 30% make no sense, lead the question, or are too ambiguous. Of the 70%, I have to make minor edits to about a third of them.

    Another example of how the benchmarks fail (specifically for vision, since I have less experience with the pure-text benchmarks): Almost all of the questions fall into either having the VLM read a chart/diagram/table and answer some question about it, or identify some basic property of an image. The former just tests the vision component's ability to do OCR, and then the LLM's intelligence. The latter are things like "Is this an oil painting or digital art?" and "Is the sheep in front of or behind the car" when the image is a clean shot of a sheep and a car. Absolutely nothing that tests a more deep and thorough understanding of the content of the images, nuances, or require the VLM to think intelligently about the visual content.

    Also, due to the nature of benchmarks, it can be quite difficult to test how the models perform "in the wild." You can't really have free-form answers on benchmarks, so they tend to be highly constrained opting for either multiple choice quizzes or using various hacks to test if the LLM's answer lines up with ground truth. Multiple choice is significantly easier in general, raising the base pass rate. Also the distractors tend to be quite poorly chosen. Rather than representing traps or common mistakes, they are mostly chosen randomly and are thus often easy to weed out.

    So there's really only a weak correlation between either of those metrics and real world performance.

  • j_maffe a day ago

    There's absolutely a huge gap between user preference and model performanc that is widening by the minute. The more performant these models get, the more individual and syntactical preferences prevail.

whywhywhywhy a day ago

Disjointed branding with the apache style folders suggesting openness and freedom and clicking though I need to do a personal info request form...

  • accrual a day ago

    Same. I associated the Apache style with the early open web where one can browse freely without scripts and such, but looks to just be a façade here.

Alifatisk 11 hours ago

I remember when Google announced Geminis theoretical limit of 10M tokens context window, I was impressed. But it seems like that theoretical limit stayed as theoretical and they just pushed up to 2M. Which is still impressive.

Today, it seems Meta has crushed that wall with truly 10M tokens, wow.

I was also curious to how well Llama would be able to utilize the whole context window, it kinda pointless to have a large window if you can't recall most, if not all of it. The needle in the haystack test showed this is not the case, I wonder how they achieved this.

bastawhiz 19 hours ago

I don't really understand how Scout and Maverick are distillations of Behemoth if Behemoth is still training. Maybe I missed or misunderstood this in the post?

Did they distill the in-progress Behemoth and the result was good enough for models of those sizes for them to consider releasing it? Or is Behemoth just going through post-training that takes longer than post-training the distilled versions?

Sorry if this is a naïve question.

  • paradite 14 hours ago

    My understanding is that they have a base model checkpoint for Behemoth from pre-training.

    This base model is not instruction-tuned so you can't use it like a normal instruction-tuned model for chatbots.

    However, the base model can be distilled, and then the distilled model is post-trained to be instruction tuned, which can be released as a model for chatbots.

  • voxgen 14 hours ago

    > Or is Behemoth just going through post-training that takes longer than post-training the distilled versions?

    This is the likely main explanation. RL fine-tuning repeatedly switches between inference to generate and score responses, and training on those responses. In inference mode they can parallelize across responses, but each response is still generated one token at a time. Likely 5+ minutes per iteration if they're aiming for 10k+ CoTs like other reasoning models.

    There's also likely an element of strategy involved. We've already seen OpenAI hold back releases to time them to undermine competitors' releases (see o3-mini's release date & pricing vs R1's). Meta probably wants to keep that option open.

    • rfoo 13 hours ago

      > see o3-mini's release date & pricing vs R1's

      This backfires though, if OAI released o3-mini before DeepSeek-R1, R1 would be a lot less impactful.

mark_l_watson 21 hours ago

I started running Llama 4 Scout on Groq using my Common Lisp client, and now trying Llama 4 Maverick on abacus.ai

Really impressive!

Also, check out the price/performance numbers: about $0.20 per million input tokens compared to about $5 for GPT-4o [1]

[1] https://x.com/kimmonismus/status/1908624648608133297

akulbe a day ago

How well do you folks think this would run on this Apple Silicon setup?

MacBook Pro M2 Max

96GB of RAM

and which model should I try (if at all)?

The alternative is a VM w/dual 3090s set up with PCI passthrough.

  • jasonjmcghee a day ago

    Depends on quantization. 109B at 4-bit quantization would be ~55GB of ram for parameters in theory, plus overhead of the KV cache which for even modest context windows could jump total to 90GB or something.

    Curious to here other input here. A bit out of touch with recent advancements in context window / KV cache ram usage

redox99 a day ago

It seems to be comparable to other top models. Good, but nothing ground breaking.

  • jasonjmcghee a day ago

    Scout outperforms llama 3.1 405b and Gemini Flash 2.0 lite and it's MoE so as fast as a 17B model. That's pretty crazy.

    It means you can run it on a high-ram apple silicon and it's going to be insanely fast on groq (thousands of tokens per second). Time to first token will bottleneck the generation.

simonklee a day ago

Is this the first model that has a 10M context length?

  • bradhilton a day ago

    I know Google DeepMind ran experiments with 10M a while ago, but I think this will be the first legit, released 10M context window model.

kristianp 21 hours ago

I'd like to discuss the matter of size. Llama has gone from talking up an 8b model as capable to having a smallest model of 109b. What will be the sizes in a years time? Things are moving out of reach for commodity pc's, 128GB is possible, but expensive.

  • simonw 21 hours ago

    I'm hoping that Llama 4 goes the same way as Llama 3.

    The first Llama 3 models released were 8B and 70B in April 2024.

    Llama 3.1 came later in July at 8B, 70B, and 405B.

    Llama 3.2 in September got really interesting: 1B, 3B, 11B and 90B.

    Then Llama 3.3 in December was 70B but claimed performance similar to the earlier Llama 3.1 405B!

    Llama 4 is 109B and 400B, both of which were trained with the help of the 2T(?) "Behemoth".

    I'm hoping we'll see further releases in the Llama 4 series that are smaller. I'm particularly excited to see if they produce a ~24B model, since that appears to be the sweet spot for running models on my 64GB laptop while still being able to have other applications running at the same time. Mistral Small 3.1 is a 24B model and is absolutely superb.

    (Fleshed this comment out a bit on my blog: https://simonwillison.net/2025/Apr/5/llama-4-notes/#my-hopes...)

informal007 20 hours ago

How much GPU memory are required for inference if it's 10M context?

utopcell 18 hours ago

How are Maverick and Scout distilled from Behemoth if the latter is not done training? Do they distill from some intermediate, "good enough" snapshot?

  • harisec 5 hours ago

    Yes, during training multiple checkpoints are created, you can distill from any checkpoint you want.

latchkey a day ago

One of the links says there are 4 different roles to interact with the model and then lists 3 of them.

supernovae an hour ago

It's too bad these models are built on the expectation of pirating the world

highfrequency 21 hours ago

Crazy that there are now five and a half companies that all have roughly state of the art LLMs.

> We developed a new training technique which we refer to as MetaP that allows us to reliably set critical model hyper-parameters such as per-layer learning rates and initialization scales. We found that chosen hyper-parameters transfer well across different values of batch size, model width, depth, and training tokens.

This sounds interesting. Anyone have a link to the paper or other documentation on MetaP?

shreezus a day ago

Haven't had a chance to play with this yet, but 10M context window is seriously impressive. I think we'll see models with 100M context relatively soon, and eliminate the need for RAG for a lot of use cases.

7thpower a day ago

Looking forward to this. Llama 3.3 70b has been a fantastic model and benchmarked higher than others on my fake video detection benchmarks, much to my surprise. Looking forward to trying the next generation of models.

impure a day ago

10 million token context window? Damn, looks like Gemini finally has some competition. Also I'm a little surprised this is their first Mixture of Experts model, I thought they were using that before.

1024core 21 hours ago

Anyone know what they mean by this:

> We developed a novel distillation loss function that dynamically weights the soft and hard targets through training.

Amekedl 11 hours ago

So the wall has been really been hit already for now, ouch. It was to be expected with gpt-“4.5”, but still, the realization now really feels grounded.

  • killerstorm 11 hours ago

    It's kinda hilarious to see people claiming that the wall has been hit for the past two years, while evals are creeping up each month, particularly realistic end-to-end SWE-bench.

    Have you compared GPT-4.5 to 4o?

    GPT-4.5 just knows things. Some obscure programming language? It knows the syntax.

    Obviously, that's not sufficient - you also need reasoning, post-training, etc. so quite predictably G2.5P being a large model + reasoning + tuning got SotA in code generation.

    (FWIW I think if it was tuned for a particular input/output format it could get another 10%)

    But, yeah, the wall, the wall!

    • Amekedl 10 hours ago

      Ever heard about benchmark contamination?

      Ever tried to explain a new concept, like a new state management store for web frontend?

      Most fail spectacularly there, sonnet 3.7 I had reasonable ""success"" with, but not 4.5. It faltered completely.

      Let’s not get ahead of ourselves. Looking at training efficiency in this now, and all the other factors, it really is difficult to paint a favorable picture atm.

dormando a day ago

Does anyone run these "at home" with small clusters? I've been googling unsuccessfully and this thread doesn't refer to anything.

So a non-quantized scout won't fit in a machine with 128GB of RAM (like framework or mac studio M4). Maverick is maybe a 512GB M3 Max mac studio. Is it possible (and if so what're the tradeoffs for) running like one instance of Scout on three 128GB frameworks?

mrcwinn a day ago

I had just paid for SoftRAM but happy nonetheless to see new distilled models. Nice work Meta.

jacooper 2 hours ago

BTW these models arent allowed to be used in the EU.

system2 a day ago

Llama 4 Maverick: 788GB

Llama 4 Scout: 210GB

FYI.

jwr 19 hours ago

For those unfamiliar with the "active parameters" terminology, what would be the RAM requirements?

E.g.can I run the smallest one on my Macbook Pro (M4 Max, 64GB) like I can run gemma3?

  • dragonwriter 18 hours ago

    The RAM requirements for storing the parameters are set by the total, not active, parameters. Llama4 Scout is 109B model, so, at Int4 quantization, it will require ~55GB for the model. With 64GB, you could probably run it, but I would imagine not with a very large context size.

amrrs a day ago

The entire licensing is such a mess and Mark Zuckerberg still thinks Llama 4 is open source!

> no commercial usage above 700M MAU

> prefix "llama" in any redistribution eg: fine-tuning

> mention "built with llama"

> add license notice in all redistribution

  • thawab a day ago

    Who has above 700M MAU and doesn't have their own LLM?

    • daemonologist 17 hours ago

      Well, Wikipedia, but I take your point.

  • andy99 11 hours ago

    You forgot the most egregious term which is that users have to abide by an acceptable use policy that only allows you to use it for what Meta says you can.

  • AIPedant a day ago

    I am still dismayed how quickly we gave up on including the pre-training data as a requirement for "open-source" LLMs.

    As someone who thinks LLMs as akin to Lisp expert systems (but in natural language): is like including the C source code to your Lisp compiler, but claiming the Lisp applications are merely "data" and shouldn't be included.

barrenko a day ago

When will this hit the Meta AI that I have within WhatsApp since of last week?

paulmendoza 16 hours ago

How long did they run the training job for? Curious how much it costs to train all of these models?

yusufozkan a day ago

> while pre-training our Llama 4 Behemoth model using FP8 and 32K GPUs

I thought they used a lot more GPUs to train frontier models (e.g. xAi training on 100k). Can someone explain why they are using so few?

  • joaogui1 a day ago

    I don't want to hunt the details on each of theses releases, but

    * You can use less GPUs if you decrease batch size and increase number of steps, which would lead to a longer training time

    * FP8 is pretty efficient, if Grok was trained with BF16 then LLama 4 should could need less GPUs because of that

    * Depends also on size of the model and number of tokens used for training, unclear whether the total FLOPS for each model is the same

    * MFU/Maximum Float Utilization can also vary depending on the setup, which also means that if you're use better kernels and/or better sharding you can reduce the number of GPUs needed

spwa4 a day ago

I hope this time multimodal includes multimodal outputs!

lyu07282 a day ago

Anyone know how the image encoding works exactly?

    <|image_start|><|patch|>...<|patch|><|tile_x_separator|><|patch|>...<|patch|><|tile_y_separator|><|patch|>...<|patch|><|image|><|patch|>...<|patch|><|image_end|>Describe this image in two sentences<|eot|><|header_start|>assistant<|header_end|>
Is "..." here raw 4 bytes RGBA as an integer or how does this work with the tokenizer?
gzer0 a day ago

10M context length and surpasses claude-3.7-sonnet and GPT-4.5.

Can't wait to dig in on the research papers. Congrats to the llama team!

artninja1988 a day ago

Thank you meta for open sourcing! Will there be a llama with native image output similar to 4os? Would be huge

  • philipwhiuk a day ago

    Probably to head off allegations of profiting from breach of copyright.

elromulous a day ago

Was this released in error? One would think it would be accompanied by a press release / blog post.

  • neilv a day ago

    Llama4 wasn't released... it escaped!

  • bob1029 a day ago

    I assumed the same. There are links here that 404.

  • tarruda a day ago

    Llama.com has the blog post

ilove_banh_mi a day ago

>10M context window

what new uses does this enable?

  • base698 a day ago

    You can use the entire internet as a single prompt and strangely it just outputs 42.

  • sshh12 a day ago

    Video is a big one that's fairly bottlenecked by context length.

  • voidspark a day ago

    Long chats that continue for weeks or months.

  • kilimounjaro a day ago

    You can vibe code microsoft office in a single prompt

RazorDev a day ago

Exciting progress on fine-tuning and instruction-following! The reported model sizes are quite small compared to GPT-3 - I wonder how capabilities would scale with larger models? Also curious about the breakdown of the 40B tokens used for fine-tuning. Overall, great to see more open research in this space.

drilbo a day ago

their huggingface page doesn't actually appear to have been updated yet

  • accrual a day ago

    Hope to see some GGUF quantizations soon!

scosman a day ago

128 exports at 17B active parameters. This is going to be fun to play with!

  • behnamoh a day ago

    does the entire model have to be loaded in VRAM? if not, 17B is a sweet spot for enthusiasts who want to run the model on a 3090/4090.

    • NitpickLawyer a day ago

      Yes. MoE models tipically use a different set of experts at each token. So while the "compute" is similar to a dense model equal to the "active" parameters, the VRAM requirements are larger. You could technically run inference & swap the models around, but the latency would be pretty horrendous.

      • manmal a day ago

        I think prompt processing also needs all the weights.

    • scosman a day ago

      Oh for perf reasons you’ll want it all in vram or unified memory. This isn’t a great local model for 99% of people.

      I’m more interested in playing around with quality given the fairly unique “breadth” play.

      And servers running this should be very fast and cheap.

isawczuk a day ago

Messenger started to get Meta AI assistant, so this is logical next step

  • pests a day ago

    It’s had that for I feel like. Close to a year tho, 6 months at least

asdev a day ago

I don't think open source will be the future of AI models. Self hosting an AI model is much more complex and resource incentive than traditional open source SaaS. Meta will likely have a negative ROI on their AI efforts

  • Centigonal a day ago

    The users of open source software are not limited to individuals. A bank, hedge fund, or intelligence agency might be willing to put forth the effort to self host an AI model versus sending their prompts and RAG context to a third party.

fpgaminer a day ago

https://www.llama.com/ https://www.llama.com/docs/model-cards-and-prompt-formats/ll...

Very exciting. Benchmarks look good, and most importantly it looks like they did a lot of work improving vision performance (based on benchmarks).

The new suggested system prompt makes it seem like the model is less censored, which would be great. The phrasing of the system prompt is ... a little disconcerting in context (Meta's kowtowing to Nazis), but in general I'm a proponent of LLMs doing what users ask them to do.

Once it's on an API I can start throwing my dataset at it to see how it performs in that regard.

  • fpgaminer a day ago

    Alright, played with it a little bit on the API (Maverick). Vision is much better than Llama 3's vision, so they've done good work there. However its vision is not as SOTA as the benchmarks would indicate. Worse than Qwen, maybe floating around Gemini Flash 2.0?

    It seems to be less censored than Llama 3, and can describe NSFW images and interact with them. It did refuse me once, but complied after reminding it of its system prompt. Accuracy of visual NSFW content is not particularly good; much worse than GPT 4o.

    More "sensitive" requests, like asking it to guess the political affiliation of a person from an image, required a _lot_ of coaxing in the system prompt. Otherwise it tends to refuse. Even with their suggested prompt that seemingly would have allowed that.

    More extreme prompts, like asking it to write derogatory things about pictures of real people, took some coaxing as well but was quite straight-forward.

    So yes, I'd say this iteration is less censored. Vision is better, but OpenAI and Qwen still lead the pack.

lousken a day ago

ollama when

  • jovezhong a day ago

    why only llama3.x models are listed on ollama? llama4 no longer wants to support ollama, to better track the adoption?

tomdekan a day ago

So, Quasar == Llama 4 Behemoth?

krashidov a day ago

Anyone know if it can analyze PDFs?

Centigonal a day ago

Really great marketing here, props!

ein0p 13 hours ago

If it's not on Ollama, nobody is going to care beyond perusing the metrics.

andrewstuart a day ago

How much smaller would such a model be if it discarded all information not related to computers or programming?

  • accrual a day ago

    I wonder if there will be a market for "old timey" models one day, ones with a cutoff date of 1800 or similar.

    • intelkishan 15 hours ago

      I guess the lack of training corpus would be a major issue with such a use case.

andrewstuart a day ago

Self hosting LLMs will explode in popularity over next 12 months.

Open models are made much more interesting and exciting and relevant by new generations of AI focused hardware such as the AMD Strix Halo and Apple Mac Studio M3.

GPUs have failed to meet the demands for lower cost and more memory so APUs look like the future for self hosted LLMs.

  • mdp2021 a day ago

    > new generations of AI focused hardware

    Some benchmarks are not encouraging. See e.g. https://www.hardware-corner.net/mac-studio-m3-ultra-deepseek...

    That «AI focused hardware» will either have extremely fast memory, and cost prohibitively, or have reasonable costs, and limits that are to be assessed.

    • andrewstuart a day ago

      Errrr that’s a 671B model.

      • mdp2021 a day ago

        Yes, but what will you need as you will prepare to be set for your personal needs?

        We are far from having reached optimal technology at trivial cost. State-of-the-art commercial VRAM is over 10x faster than the standard one - and costs well over 10x.

        Reasonably available speeds may or may not be acceptable.

  • NitpickLawyer a day ago

    For single user, maybe. But for small teams GPUs are still the only available option, when considering t/s and concurrency. Nvidia's latest 6000pro series are actually reasonably priced for the amount of vram / wattage you get. A 8x box starts at 75k eur and can host up to DS3 / R1 / Llama4 in 8bit with decent speeds, context and concurrency.

steele 19 hours ago

Consuming pirated literature en masse produces a bias away from authoritarianism; consider me flabbergasted.

ianks 14 hours ago

Are we going to find out that Meta pirated libgen again, with zero recognition to the authors?

“Open-sourcing it” doesn’t magically absolve you of the irreparable damages you’ve caused society. You stole their life’s work so your company could profit off of rage-slop.

  • MagicMoonlight 14 hours ago

    The problem is, how do you value one book? £10? Or are we saying £10 every time someone uses the AI?

    Should Taylor swift be liable to pay commission for every piece of music she listened to while training? They will have influenced her work in some way.

    I’d rather go the other way and say that the companies have to freely release their data sets, if the data is derived from other people’s work. It would put everyone on a level playing field.

rvz a day ago

As expected, Meta doesn't disappoint and accelerates the race to zero.

Meta is undervalued.

  • phyrex a day ago

    And it's 50% off right now...

  • brcmthrowaway a day ago

    How does Meta make money from Llama?

    • vessenes a day ago

      It’s an extending innovation for them - makes them more efficient internally, and crucially engages their ad-driven customer base. Giving it away is great, it levels the playing field for competitors on tech while NOT giving them direct access to the billions of users FB has. Plus it makes it less likely that OpenBrainTM will achieve runaway quality internally.

    • phyrex a day ago

      When people do cool stuff they share it on metas platforms, which drives ad impressions

    • manishsharan a day ago

      Have you notice more verbose posts in your feed ? Llama is allowing everyone to sound more knowledgeable than they are. AI based content generation is like an instragram filter for intellect; everyone is pretending to be thoughtful.

    • paxys a day ago

      How does OpenAI make money from AI? The vast majority of the planet isn't paying them $20/month, and it is likely that they will never recover training and inference costs just from subscription fees. Frying GPUs to generate Ghibli images is getting them a negligible amount of added revenue.

      Now think of Meta and their suite of products which already generate $160B+/yr from advertising. Every extra minute they can get a user to spend on Facebook or Instagram, this number goes up. Think about how much money Meta will make if the next viral AI moment happens in their products.

      TL;DR: AI -> engagement -> ads -> revenue.

    • rvz a day ago

      They don't need to directly. They have multiple levers of products to get more money if they wanted to.

      Threads for example is introducing ads and is likely being used to train their Llama models.

      That is only one of many ways that Meta can generate billions again from somewhere else.

ein0p 19 hours ago

Strange choice of languages for their "multilingual" capabilities, but OK. I wonder why there's no Chinese.

EGreg 20 hours ago

Can we somehow load these inside node.js?

What is the easiest way to load them remotely? Huggingface Spaces? Google AI Studio?

I am teaching a course on AI to non-technical students, and I wanted the students to have a minimal setup: which in this case would be:

1) Browser with JS (simple folder of HTML, CSS) and Tensorflow.js that can run models like Blazeface for face recognition, eye tracking etc. (available since 2019)

2) Node.js with everything baked in (javascript) and use a CDN like CloudFront with tunnel to serve it to the web

3) So if they download models to their computer, how would they run them? Is it possible to run the smallest LLaMa locally? Or any GGUF models in JS? Or they have to have Python and PyTorch?

PS: Here is what the class looks like: https://vimeo.com/1060576298/c5693047e0?share=copy

  • tucnak 13 hours ago

    You're not qualified to teach a course on AI if you're asking questions like that. Please don't scam students, they're naive and don't know better and you're predating.

    • EGreg 2 hours ago

      I didn’t seek this out. I was asked to teach this course by the directors of the program that the students paid for. The students want me to teach this. I have been upfront from day 1 with everybody.

      Oh trust me, I am very upfront about what I know and do not know. My main background is in developing full stack web sites, apps, and using APIs. I have been using AI models since 2019, using Tensorflow.js in the browser and using APIs for years. I am not in the Python ecosystem, though, I don’t go deep into ML and don’t pretend to. I don’t spend my days with PyTorch, CUDA or fine-tuning models or running my own infrastructure.

      Your comment sounds like “you don’t know cryptographyc if you have to ask basic questions about quantum-resistant SPHICS+ or bilinear pairings, do not teach a class on how to succeed in business using blockchain and crypto, you’re scamming people.”

      Or in 2014: “if you don’t know how QUIC and HTTP/2 works and Web Push and WebRTC signaling, and the latest Angular/React/Vue/Svelte/… you aren’t qualified to teach business school students how to make money with web technology”.

      It’s the classic engineering geek argument. But most people can make money without knowing the ins and outs of every single technology, every single framework. It is much more valuable to see what works and how to use it. Especially when the space changes week to week as I teach it. The stuff I teach in the beginning of the course (eg RAG) may be obsolete by the time the latest 10-million token model drops.

      I did found an AI startup a few years ago and was one of the first to use OpenAI’s completions API to build bots for forums etc. I also work to connect deep tech to AI, to augment it: https://engageusers.ai/ecosystem.pdf

      And besides — every time I start getting deep into how the models work, including RoPe and self—attention and transformer architecture, their eyes glaze over. They barely know the difference between a linear function wnd an activation function. At best I am giving these non-technical business students three things:

      1) an intuition about how the models are trained, do inference and how jobs are submitted, to take the magic out of it. I showed them everything from LLMs to Diffusion models and GANs, but I keep emphasizing that the techniques are improving

      2) how to USE the latest tools like bolt.new or lovable or opusclip etc.

      3) do hands-on group projects to simulate working on a team and building a stack, that’s how I grade them. And for this I wanted to MINIMIZE what they need to install. LLaMa 4 for one GPU is the ticket!

      Yeah so I was hoping the JS support was more robust, and asking HN if they knew of any ports (at least to WASM). But no, it’s firmly locked into PyTorch and CUDA for now. So I’m just gonna stick with Tensorflow for educational purposes, like people used Pascal or Ruby when teaching. I want to let them actually install ONE thing (node.js) and be able to run inferenfe in their browser. I want them to be able to USE the tools and build websites and businesses end-to-end, launch a business and have agents work for them.

      Some of the places they engage the most is when I talk about AI and society, sustainability or regulations. That’s the cohort

      But you can keep geeking out on low-level primitives. I remember writing my own 3D-persoective-correct-texturemapping engine and then GPUs came out. Carmack and others kept at it for a while, others moved on. You could make a lot of money in 3D games without knowing how texturemapping and lighting worked, and same goes for this.

      PS: No thanks to you but I found what I was looking for myself in a few minutes. https://youtu.be/6LHNbeDADA4?si=LCM2E48hVxmO6VG4 https://github.com/Picovoice/picollm PicoLLM is a way to run LLaMa 3 on Node, it will be great for my students. I bet you didn’t know much about Node.js ecosystem for LLMs because it’s very nascent.

DeepYogurt a day ago

Jesus. How much ram does the big one take to run?

  • ZiiS 13 hours ago

    Wouldn't fill a NVIDIA DGX B300 node.

yapyap a day ago

is this the quasar LLM from openrouter?

  • alchemist1e9 a day ago

    That one claims to be from OpenAI when asked, however that could easily be hallucination from being feed lots of OpenAI generated synthetic training data.

    Would be really crazy if it is quasar LLM.

RandyOrion 20 hours ago

I guess I have to say thank you Meta?

A somewhat sad rant below.

Deepseek starts a toxic trend of providing super, super large MoE. And MoE is famous for being parameter-inefficient, which is unfriendly to normal consumer hardware with limited vram.

The super large size of LLM also disables nearly every people from doing meaningful development on these models. R1-1776 is the only fine-tune variation of R1 that makes some noise, and it's by a corp not some random individual.

In this release, the smallest Llama 4 model is over 100B, which is not small by any means, and will prevent people from fine-tuning as well.

On top of that, to access llama models on hugging face has become notoriously hard because of 'permission' issues. See details in https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/dis...

Yeah, I personally don't really see the point of releasing large MoEs. I'll stick to small and dense LLMs from Qwen, Mistral, Microsoft, Google and others.

Edit: This comment got downvoted, too. Please explain your reason before doing that.

rfoo a day ago

From model cards, suggested system prompt:

> You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.

It's interesting that there's no single one of CJK languages mentioned. I'm tempted to call this a racist model even.

  • accrual a day ago

    Isn't there a vast quantity of relevant information in CJK languages? I remember reading some models even "think" in other languages where there might be more detail before outputting in the target language.

    • voidspark a day ago

      The model wasn't trained on those languages (yet). The only possible explanation is racism. The model is also racist against Russians and Icelanders.

      • dragonwriter 18 hours ago

        > The model wasn't trained on those languages (yet).

        It probably has been trained on them (it was trained on 40 trillion tokens covering 200 languages, they almost certainly didn't avoid CJK languages.

        They only have been further fine-tuned on a set of 12 languages. (I wonder if that is the set the base Behemoth model both are distilled from had been trained on when they were distilled; Behemoth is apparently not completely finished, and perhaps there will be further revisions of the distilled models as it is.)

  • Philpax a day ago

    That is a very strange omission...