47 comments

  • simonw 40 minutes ago
    I've been running this on my laptop with the Unsloth 20.9GB GGUF in LM Studio: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/blob/mai...

    It drew a better pelican riding a bicycle than Opus 4.7 did! https://simonwillison.net/2026/Apr/16/qwen-beats-opus/

    • jubilanti 2 minutes ago
      I wonder when pelican riding a bicycle will be useless as an evaluation task. The point was that it was something weird nobody had ever really thought about before, not in the benchmarks or even something a team would run internally. But now I'd bet internally this is one of the new Shirley Cards.
    • cyclopeanutopia 19 minutes ago
      But that you also gave a win to Qwen on flamingo is pretty outrageous! :)

      The right one looks much better, plus adding sunglasses without prompting is not that great. Hopefully it won't add some backdoor to the generated code without asking. ;)

    • bertili 25 minutes ago
      It's fascinating that a $999 Mac Mini (M4 32GB) drawing roughly the same wattage as a human brain gets us this far.
    • jamwise 37 minutes ago
      I've had some really gnarly SVGs from Claude. Here's what I got after many iterations trying to draw a hand: https://imgur.com/a/X4Jqius
    • danielhanchen 28 minutes ago
      Oh that is pretty good! And the SVG one!
    • slekker 30 minutes ago
      How does it do with the "car wash" benchmark? :D
  • bertili 4 hours ago
    A relief to see the Qwen team still publishing open weights, after the kneecapping [1] and departures of Junyang Lin and others [2]!

    [1] https://news.ycombinator.com/item?id=47246746 [2] https://news.ycombinator.com/item?id=47249343

    • zozbot234 3 hours ago
      This is just one model in the Qwen 3.6 series. They will most likely release the other small sizes (not much sense in keeping them proprietary) and perhaps their 122A10B size also, but the flagship 397A17B size seems to have been excluded.
      • kylehotchkiss 1 hour ago
        How many people/hackernews can run a 397b param model at home? Probably like 20-30.
        • jubilanti 26 minutes ago
          You can rent a cloud H200 with 140GB VRAM in a server with 256GB system ram for $3-4/hr.
        • kridsdale3 36 minutes ago
          I can (barely, but sustainably) run Q3.5 397B on my Mac Studio with 256GB unified. It cost $10,000 but that's well within reach for most people who are here, I expect.
          • SlavikCA 29 minutes ago
            I'm running it on my Intel Xeon W5 with 256GB of DDR5 and Nvidia 72GB VRAM. Paid $7-8k for this system. Probably cost twice as much now.

            Using UD-IQ4_NL quants.

            Getting 13 t/s. Using it with thinking disabled.

          • qlm 28 minutes ago
            Hacker News moment
          • toxik 34 minutes ago
            $10k is well outside my budget for frivolous computer purchases.
            • bdangubic 12 minutes ago
              99.97% of HN users are nodding… :)
          • rwmj 13 minutes ago
            For some reason you were being downvoted but I enjoy hearing how people are running open weights models at home (NOT in the cloud), and what kind of hardware they need, even if it's out of my price range.
        • r-w 1 hour ago
          OpenRouter.
          • mistercheese 30 minutes ago
            Yeah, I think there are benefits to third-party providers being able to run the large models with stronger guarantees about ZDR and knowing where they are hosted! So open weights, even for the large models we can't personally serve on our laptops, are still useful.
          • parsimo2010 47 minutes ago
            If you're running it from OpenRouter, you might as well use Qwen3.6 Plus. You don't need to be picky about a particular model size of 3.6. If you just want the 397b version to save money, just pick a cheaper model like M2.7.
        • stavros 43 minutes ago
          It doesn't matter how many can run it now, it's about freedom. Having a large open weights model available allows you to do things you can't do with closed models.
      • stingraycharles 3 hours ago
        397A17B = 397B total weights, 17B per expert?
        • zackangelo 3 hours ago
          17b per token. So when you’re generating a single stream of text (“decoding”) 17b parameters are active.

          If you’re decoding multiple streams, it will be 17b per stream (some tokens will use the same expert, so there is some overlap).

          When the model is ingesting the prompt (“prefilling”) it’s looking at many tokens at once, so the number of active parameters will be larger.

        • wongarsu 3 hours ago
          397B params, 17B activated at the same time

          Those 17B might be split among multiple experts that are activated simultaneously

        • littlestymaar 3 hours ago
          That's not how it works. Many people get confused by the “expert” naming, when in reality the key part of the original name “sparse mixture of experts” is sparse.

          Experts are just chunks of each layer's MLP that are only partially activated by each token; there are thousands of “experts” in such a model (for Qwen3-30BA3, it was 48 layers x 128 “experts” per layer, with only 8 active for each token).

      • bertili 3 hours ago
        Is there any source for these claims?
    • guitcastro 4 hours ago
      I really wish they released qwen-image 2.0 as open weights.
  • homebrewer 4 hours ago
    Already quantized/converted into a sane format by Unsloth:

    https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

    • Aurornis 2 hours ago
      Unsloth is great for uploading quants quickly to experiment with, but everyone should know that they almost always revise their quants after testing.

      If you download the release day quants with a tool that doesn’t automatically check HF for new versions you should check back again in a week to look for updated versions.

      Sometimes the launch day quantizations have major problems, which leads to early adopters dismissing useful models. You have to wait for everyone to test and fix bugs before giving a model a real evaluation.

      • danielhanchen 2 hours ago
        We re-uploaded Gemma4 4 times - 3 times were due to 20 llama.cpp bug fixes, some of which we helped solve as well. The 4th is an official Gemma chat template improvement from Google themselves, so these are out of our hands. All providers had to re-fix their uploads, so not just us.

        For MiniMax 2.7 - there were NaNs, but it wasn't just ours - all quant providers had it - we identified that 38% of bartowski's quants had NaNs; ours was 22%. We identified a fix and have already fixed ours - see https://www.reddit.com/r/LocalLLaMA/comments/1slk4di/minimax.... Bartowski has not, but is working on it. We always share our investigations.

        For Qwen3.5 - we shared our 7TB research artifacts showing which layers not to quantize - all providers' quants were not optimal, not broken - ssm_out and ssm_* tensors were the issue - we're now the best in terms of KLD and disk space - see https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwe...

        On other fixes, we also fixed bugs in many OSS models like Gemma 1, Gemma 3, Llama chat template fixes, Mistral, and many more.

        It might seem these issues are due to us, but it's because we publicize them and tell people to update. 95% of them are not related to us, but as good open source stewards, we should update everyone.

        • evilduck 1 hour ago
          I just wanted to express gratitude to you guys, you do great work. It is a little annoying to have to redownload big models though, and keeping up with the AI news and community sentiment is a full-time job. I wish there was some mechanism somewhere (on your site or Huggingface or something) for displaying feedback or confidence in a model being "ready for general use" before kicking off 100+ GB model downloads.
          • danielhanchen 1 hour ago
            Hey thanks - yes agreed - for now we do:

            1. Split metadata into shard 0 for huge models, so a ~10MB download covers chat template fixes - however sometimes fixes cause a recalculation of the imatrix, which means all quants have to be re-made

            2. Add HF discussion posts on each model talking about what changed, and on our Reddit and Twitter

            3. Hugging Face XET now has de-duplication downloading of shards, so generally redownloading 100GB models again should be much faster - it chunks 100GB into small chunks and hashes them, and only downloads the shards which have changed
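
            For example, re-fetching just one quant's files with the Xet-aware CLI looks roughly like this (a sketch, not an official Unsloth workflow - the include pattern is just a guess based on the file names above):

              # Xet-aware downloads only re-fetch chunks whose hashes changed
              pip install -U "huggingface_hub[hf_xet]"
              # pull (or re-pull) only the Q4_K_XL files from the repo
              huggingface-cli download unsloth/Qwen3.6-35B-A3B-GGUF \
                --include "*UD-Q4_K_XL*" --local-dir ./qwen3.6-35b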

          • CamperBob2 36 minutes ago
            Best policy is to just wait a couple of weeks after a major model is released. It's frustrating to have to re-download tens or hundreds of GB every few days, but the quant producers have no choice but to release early and often if they want to maintain their reputation.

            Ideally the labs releasing the open models would work with Unsloth and the llama.cpp maintainers in advance to work out the bugs up front. That does sometimes happen, but not always.

            • danielhanchen 14 minutes ago
              Yep agreed at least 1 week is a good idea :)

              We do get early access to nearly all models, and we do find the most pressing issues sometimes. But sadly some issues are really hard to find and diagnose :(

        • sowbug 2 hours ago
          Please publish sha256sums of the merged GGUFs in the model descriptions. Otherwise it's hard to tell if the version we have is the latest.
        • dist-epoch 2 hours ago
          What do you think about creating a tool which can just patch the template embedded in the .gguf file instead of forcing a re-download? The whole file hash can be checked afterwards.
          • danielhanchen 1 hour ago
            Sadly it's not always chat template fixes :( But yes we now split the first shard as pure metadata (10MB) for huge models - these include the chat template etc - so you only need to download that.

            For serious fixes, sadly we have to re-compute imatrix since the activation patterns have changed - this sadly makes the entire quant change a lot, hence you have to re-download :(

      • embedding-shape 2 hours ago
        Not to mention that almost every model release has some (at least minor) issue in the prompt template and/or the runtime itself. So even if they (not talking about Unsloth specifically, providers in general) claim "Day 0 support", do pay extra attention to actual quality, as it takes a week or two before issues have been hammered out.
        • danielhanchen 2 hours ago
          Yes this is fair - we try our best to communicate issues - I think we're mostly the only ones doing the communication that model A or B has been fixed etc.

          We try our best as model distributors to fix them on day 0 or 1, but 95% of issues aren't our issues - as you mentioned it's the chat template or runtime etc

      • fuddle 1 hour ago
        I don't understand why the open source model providers don't also publish the quantized version?
        • danielhanchen 1 hour ago
          They sometimes do! Qwen, Google etc do them!
    • torginus 30 minutes ago
      Why doesn't Qwen itself release the quantized models? My impression is that quantization is a highly nontrivial process that can degrade the model in non-obvious ways, so it's best handled by the people who actually built the model; otherwise the results might be disappointing.

      Users of the quantized model might even be led to think that the model sucks because the quantized version does.

    • sander1095 2 hours ago
      I sense that I don't really understand enough of your comment to know why this is important. I hope you can explain some things to me:

      - Why is Qwen's default "quantization" setup "bad"?

      - Who is Unsloth? Why is their format better? What gains does a better format give? What are the downsides of a bad format?

      - What is quantization? Granted, I can look this up myself, but I thought I'd ask for the full picture for other readers.

      • danielhanchen 2 hours ago
        Oh hey - we're actually the 4th largest distributor of OSS AI models in GB downloads - see https://huggingface.co/unsloth

        https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs is what might be helpful. You might have heard 1bit dynamic DeepSeek quants (we did that) - not all layers can be 1bit - important ones are in 8bit or 16bit, and we show it still works well.

      • dist-epoch 1 hour ago
        The default Qwen "quantization" is not "bad", it's "large".

        Unsloth releases lower-quality versions of the model (Qwen in this case). Think about taking a 95% quality JPEG and converting it to a 40% quality JPEG.

        Models are quantized to lower quality/size so they can run on cheaper/consumer GPUs.

      • est 1 hour ago
        hey, you can do a bit of research yourself and tell us your results!
    • palmotea 3 hours ago
      How much VRAM does it need? I haven't run a local model yet, but I did recently pick up a 16GB GPU, before they were discontinued.
      • WithinReason 3 hours ago
        It's on the page:

          Precision  Quantization Tag File Size
          1-bit      UD-IQ1_M         10 GB
          2-bit      UD-IQ2_XXS       10.8 GB
                     UD-Q2_K_XL       12.3 GB
          3-bit      UD-IQ3_XXS       13.2 GB
                     UD-Q3_K_XL       16.8 GB
          4-bit      UD-IQ4_XS        17.7 GB
                     UD-Q4_K_XL       22.4 GB
          5-bit      UD-Q5_K_XL       26.6 GB
          16-bit     BF16             69.4 GB
        • Aurornis 2 hours ago
          Additional VRAM is needed for context.

          This model is a MoE model with only 3B parameters active per token, which works well with partial CPU offload. So in practice you can run the -A(N)B models on systems that have a little less VRAM than you'd otherwise need. The more you offload to the CPU the slower it becomes, though.

          • Glemllksdf 2 hours ago
            Isn't that some kind of gambling if you offload random experts onto the CPU?

            Or is it only whole layers? But that would affect all experts.

            • dragonwriter 1 hour ago
              Pretty sure all partial offload systems I’ve seen work by layers, but there might be something else out there.
        • est 1 hour ago
          I really want to know what M, K, XL, and XS mean in this context and how to choose.

          I searched all the Unsloth docs and there seems to be no explanation at all.

        • JKCalhoun 1 hour ago
          "16-bit BF16 69.4 GB"

          Is that (BF16) a 16-bit float?

          • mtklein 1 hour ago
            Yes, it's a "Brain float", basically an ordinary 32-bit float with the low 16 mantissa bits cut off. Exact same range as fp32, lower precision, and not the same as the other fp16, which has less exponent and more mantissa.
          • Gracana 1 hour ago
            https://en.wikipedia.org/wiki/Bfloat16_floating-point_format

            Yes, however it’s a different format from standard fp16, it trades precision for greater dynamic range.

          • WithinReason 1 hour ago
            yes, it has 8 exponent bits like float32 instead of 5 like float16
        • palmotea 3 hours ago
          Thanks! I'd scanned the main content but I'd been blind to the sidebar on the far right.
      • tommy_axle 2 hours ago
        Pick a decent quant (4-6KM) then use llama-fit-params and try it yourself to see if it's giving you what you need.
      • zozbot234 3 hours ago
        Should run just fine with CPU-MoE and mmap, but inference might be a bit slow if you have little RAM.
      • Ladioss 2 hours ago
        You can run 25-30b model easily if you use Q3 or Q4 quants and llama-server with a pretty long list of options.
      • trvz 3 hours ago
        If you have to ask then your GPU is too small.

        With 16 GB you'll only be able to run a very compressed variant with noticeable quality loss.

        • coder543 3 hours ago
          Not true. With a MoE, you can offload quite a bit of the model to CPU without losing a ton of performance. 16GB should be fine to run the 4-bit (or larger) model at speeds that are decent. The --n-cpu-moe parameter is the key one on llama-server, if you're not just using -fit on.
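
          For reference, a minimal llama-server invocation along those lines might look like this (just a sketch - the file name and the expert-layer count are placeholders you'd tune for a 16GB card):

            # keep attention/dense weights on the GPU, push 24 expert blocks to the CPU
            llama-server -m Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf \
              -ngl 99 --n-cpu-moe 24 \
              -c 32768 --flash-attn on --jinja
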
        • palmotea 3 hours ago
          > If you have to ask then your GPU is too small.

          What's the minimum memory you need to run a decent model? Is it pretty much only doable by people running Macs with unified memory?

          • giobox 3 hours ago
            It's worth noting now there are other machines than just Apple that combine a powerful SoC with a large pool of unified memory for local AI use:

            > https://www.dell.com/en-us/shop/cty/pdp/spd/dell-pro-max-fcm...

            > https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...

            > https://frame.work/products/desktop-diy-amd-aimax300/configu...

            etc.

            But yes, a modern SoC-style system with large unified memory pool is still one of the best ways to do it.

          • TechSquidTV 3 hours ago
            My Mac Studio with 96GB of RAM is maybe just at the low end of passable. It's actually extremely good for local image generation; I could comfortably replace something like Nano Banana on my machine.

            But I don't need Nano Banana very much, I need code. And while it can write code, there's no way I would ever opt to use a local model on my machine for that. It makes so much more sense to spend $100 on Codex; it's genuinely not worth discussing.

            For non-thinking tasks, it would be a bit slower, but a viable alternative for sure.

            • slopinthebag 1 hour ago
              You just need to adjust your workflow to use the smaller models for coding. It's primarily just a case of holding them wrong if you end up with worse outputs.
          • jchw 3 hours ago
            32 GiB of VRAM is possible to acquire for less than $1000 if you go for the Arc Pro B70. I have two of them. The tokens/sec is nowhere near AMD or NVIDIA high end, but it's unexpectedly kind of decent to use. (I probably need to figure out vLLM though, as it doesn't seem like llama.cpp is able to do them justice, seemingly even with split mode = row. But still, 30t/s on Gemma 4 (on 26B MoE, not dense) is pretty usable, and you can fit a full 256k context.)

            When I get home today I totally look forward to trying the unsloth variants of this out (assuming I can get it working in anything.) I expect due to the limited active parameter count it should perform very well. It's obviously going to be a long time before you can run current frontier quality models at home for less than the price of a car, but it does seem like it is bound to happen. (As long as we don't allow general purpose computers to die or become inaccessible. Surely...)

            • zozbot234 3 hours ago
              New versions of llama.cpp have experimental split-tensor parallelism, but it really only helps with slow compute and a very fast interconnect, which doesn't describe many consumer-grade systems. For most users, pipeline parallelism will be their best bet for making use of multi-GPU setups.
              • jchw 2 hours ago
                Yeah, I was doing split tensor and it seemed like a wash. The Arc B70s are not huge on compute.

                Right now I'm only able to run them in PCI-e 5.0 x8 which might not be sufficient. But, a cheap older Xeon or TR seems silly since PCI-e 4.0 x16 isn't theoretically more bandwidth than PCI-e 5.0 x8. So it seems like if that is really still bottlenecked, I'll just have to bite the bullet and set up a modern HEDT build. With RAM prices... I am not sure there is a world where it could ever be worth it. At that point, seems like you may as well go for an obscenely priced NVIDIA or AMD datacenter card instead and retrofit it with consumer friendly thermal solutions. So... I'm definitely a bit conflicted.

                I do like the Arc Pro B70 so far. It's not a performance monster, but it's quiet and relatively low power, and I haven't run into any instability. (The AMDGPU drivers have made amazing strides, but... the stability is not legendary. :)

                I'll have to do a bit of analysis and make sure there really is an interconnect bottleneck first, versus a PEBKAC. Could be dropping more lanes than expected for one reason or another too.

                • zozbot234 2 hours ago
                  You could fit your HEDT with minimum RAM and a combination of Optane storage (for swapping system RAM with minimum wear) and fast NAND (for offloading large read-only data). If you have abundant physical PCIe slots it ought to be feasible.
            • dist-epoch 1 hour ago
              NVIDIA 5070 Ti can run Gemma 4 26B at 4-bit at 120 tk/s.

              Arc Pro B70 seems unexpectedly slow? Or are you using 8-bit/16-bit quants?

              • jchw 1 hour ago
                Unfortunately it really is running this slow with Llama.cpp, but of course that's with Vulkan mode. The VRAM capacity is definitely where it shines, rather than compute power. I am pretty sure that this isn't really optimal use of the cards, especially since I believe we should be able to get decent, if still sublinear, scaling with multiple cards. I am not really a machine learning expert, I'm curious to see if I can manage to trace down some performance issues. (I've already seen a couple issues get squashed since I first started testing this.)

                I've heard that vLLM performs much better, scaling particularly better in the multi GPU case. The 4x B70 setup may actually be decent for the money given that, but probably worth waiting on it to see how the situation progresses rather than buying on a promise of potential.

                A cursory Google search does seem to indicate that in my particular case interconnect bandwidth shouldn't actually be a constraint, so I doubt tensor level parallelism is working as expected.
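
                For reference, the vLLM side would presumably be something like this (untested on Arc, so very much a sketch - it assumes a vLLM build with XPU/Arc support, and the model id is a guess):

                  # tensor parallelism across both B70s
                  vllm serve Qwen/Qwen3.6-35B-A3B \
                    --tensor-parallel-size 2 \
                    --max-model-len 32768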

          • bfivyvysj 3 hours ago
            A bit like asking how long is a piece of string.
            • latentsea 3 hours ago
              It's twice as long as from one end to the middle.
            • palmotea 3 hours ago
              More like "about how long of a string do I need to run between two houses in the densest residential neighborhood of single-family homes in the US?"
          • layer8 3 hours ago
            It’s also doable with AMD Strix Halo.
          • angoragoats 3 hours ago
            Macs with unified memory are economical in terms of $/GB of video memory, and they match an optimized/home built GPU setup in efficiency (W/token), but they are slow in terms of absolute performance.

            With this model, since the number of active parameters is low, I would think that you would be fine running it on your 16GB card, as long as you have, say 32GB of regular system memory. Temper your expectations about speed with this setup, as your system memory and CPU are multiple times slower than the GPU, so when layers spill over you will slow down.

            To avoid this, there's no need to buy a Mac -- a second 16GB GPU would do the trick just fine, and the combined dual GPU setup will likely be faster than a cheap mac like a Mac mini. Pay attention to your PCIe slots, but as long as you have at least an x4 slot for the second GPU, you'll be fine (LLM inference doesn't need x8 or x16).

          • utilize1808 3 hours ago
            Obviously going to depend on your definition of "decent". My impression so far is that you will need between 90GB to 100GB of memory to run medium sized (31B dense or ~110B MoE) models with some quantization enabled.
            • cjbgkagh 3 hours ago
              I’m running Gemma4 31B (Q8) on my 2 4090s (48GB) with no problem.
              • Glemllksdf 2 hours ago
                I have the same setup, but I tried paperclip ai with it and it seems that either I'm unable to set it up properly or multiple agents struggle with this setup. Especially as it seems that paperclip ai and opencode (used for the connection) blow up the context to 20-30k.

                Any tips around your setup running this?

                I use lmstudio with default settings and prioritization instead of split.

                • cjbgkagh 1 hour ago
                  I asked AI for help setting it up. I use 128k context for 31B and 256k context for 26B4A. Ollama worked out of the box for me but I wanted more control with llama.cpp.

                  My command for llama-server:

                    llama-server -m /models/gemma-4-26B-A4B-it-UD-Q8_K_XL.gguf \
                      -ngl 99 -sm layer -ts 10,12 --jinja --flash-attn on \
                      --cont-batching -np 1 -c 262144 -b 4096 -ub 512 \
                      -ctk q8_0 -ctv q8_0 --host 0.0.0.0 --port 8080 --timeout 18000

          • littlestymaar 3 hours ago
            No, GP is excessively restrictive. Llama.cpp supports RAM offloading out of the box.

            It's going to be slower than if you put everything on your GPU but it would work.

            And if it's too slow for your taste you can try the quantized version (some Q3 variant should fit) and see how well it works for you.

        • FusionX 3 hours ago
          Aren't 4-bit models decent? Since this is an MoE model, I'm assuming it should have respectable tk/s, similar to previous MoE models.
    • txtsd 3 hours ago
      So I can use this in claude code with `ollama run claude`?
      • Ladioss 2 hours ago
        More like `ollama launch claude --model qwen3.6:latest`

        Also you need to check your context size: Ollama defaults to 4K if you have <24 GB of VRAM, and you need 64K minimum if you want claude to be able to at least lift a finger.
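
        If context is the issue, one way to bake in a bigger window is a custom Modelfile (a sketch using current Ollama syntax; the model tag and the 64K figure are just the ones mentioned above):

          # write a Modelfile that pins a larger context window
          printf 'FROM qwen3.6:latest\nPARAMETER num_ctx 65536\n' > Modelfile
          ollama create qwen3.6-64k -f Modelfile
          ollama run qwen3.6-64k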

        • Patrick_Devine 31 minutes ago
          If you're on a Mac, use the MLX backend versions which are considerably faster than the GGML based versions (including llama.cpp) and you don't need to fiddle with the context size. The models are `qwen3.6:35b-a3b-nvfp4`, `qwen3.6:35b-a3b-mxfp8`, and `qwen3.6:35b-a3b-mlx-bf16`.
      • pj_mukh 3 hours ago
        have you found a model that does this with usable speeds on an M2/M3?
        • postalcoder 3 hours ago
          On a M4 MBP ollama's qwen3.5:35b-a3b-coding-nvfp4 runs incredibly fast when in the claude/codex harness. M2/M3 should be similar.

          It's incomparably faster than any other model (i.e. it's actually usable without cope). Caching makes a huge difference.

    • terataiijo 3 hours ago
      lmao they are so fast yooo
      • ttul 3 hours ago
        Yes. How do they do it? Literally they must have PagerDuty set up to alert the team the second one of the labs releases anything.
        • beernet 3 hours ago
          They obviously collaborate with some of the labs prior to the official release date.
          • sigbottle 3 hours ago
            That... is a more plausible explanation I didn't think of.
            • danielhanchen 3 hours ago
              Yes we collab with them!
              • qskousen 6 minutes ago
                Sorry, this is a bit of a tangent, but I noticed you also released UD quants of ERNIE-Image the same day it released, which as I understand requires generating a bunch of images. I've been working to do something similar with my CLI program ggufy, and was curious if you had any info you could share on the kind of compute you put into that, and whether you generate full images or look at latents?
        • sigbottle 3 hours ago
          Is quantization a mostly solved pipeline at this point? I thought that architectures were varied and weird enough where you can't just click a button, say "go optimize these weights", and go. I mean new models have new code that they want to operate on, right, so you'd have to analyze the code and insert the quantization at the right places, automatically, then make sure that doesn't degrade perf?

          Maybe I just don't understand how quantization works, but I thought quantization was a very nasty problem involving a lot of plumbing

      • bildung 3 hours ago
        Bad QA :/ They had a bunch of broken quantizations in the last releases
        • danielhanchen 3 hours ago
          1. Gemma-4 we re-uploaded 4 times - 3 times were 10-20 llama.cpp bug fixes - we had to notify people to re-download the correct ones. The 4th is an official Gemma chat template improvement from Google themselves.

          2. Qwen3.5 - we shared our 7TB research artifacts showing which layers not to quantize - all providers' quants were under-optimized, not broken - ssm_out and ssm_* tensors were the issue - we're now the best in terms of KLD and disk space

          3. MiniMax 2.7 - we swiftly fixed it due to NaN PPL - we found the issue in all quants regardless of provider - so it affected everyone not just us. We wrote a post on it, and fixed it - others have taken our fix and fixed their quants, whilst some haven't updated.

          Note we also fixed bugs in many OSS models like Gemma 1, Gemma 3, Llama chat template fixes, Mistral, and many more.

          Unfortunately sometimes quants break, but we fix them quickly, and 95% of times these are out of our hand.

          We fix them swiftly and write up blogs on what happened. Other providers then simply take our blogs and fixes and re-apply them.

          • rohansood15 2 hours ago
            Thanks for all the amazing work Daniel. I remember you guys being late to OH because you were working on weights released the night before - and it's great to see you guys keep up the speed!
            • danielhanchen 2 hours ago
              Oh thanks haha :) We try our best to get model releases out the door! :) Hope you're doing great!
          • bildung 2 hours ago
            Fair enough, appreciate the detailed response! Can you elaborate why other quantizations weren't affected (e.g. bartowski)? Simply because they were straight Q4 etc. for every layer?
      • ekianjo 3 hours ago
        yeah and often their quants are broken. They had to update their Gemma4 quants like 4 times in the past 2 weeks.
        • danielhanchen 3 hours ago
          No it's not our fault - re our 4 uploads - the first 3 are due to llama.cpp fixing bugs - this was out of our control (we're llama.cpp contributors, but not the main devs) - we could have waited, but it's best to update when multiple (10-20) bugs are fixed.

          The 4th is Google themselves improving the chat template for tool calling for Gemma.

          https://github.com/ggml-org/llama.cpp/issues/21255 was another issue: CUDA 13.2 was broken - this was NVIDIA's CUDA compiler itself breaking - fully out of our hands - but we provided a solution for it.

  • mtct88 4 hours ago
    Nice release from the Qwen team.

    Small openweight coding models are, imho, the way to go for custom agents tailored to the specific needs of dev shops that are restricted from accessing public models.

    I'm thinking about banking and healthcare sector development agencies, for example.

    It's a shame this remains a market largely overlooked by Western players, Mistral being the only one moving in that direction.

    • lelanthran 3 hours ago
      > It's a shame this remains a market largely overlooked by Western players, Mistral being the only one moving in that direction.

      I've said in a recent comment that Mistral is the only one of the current players that appears to be moving towards a sustainable business - all the other AI companies are simply looking for a big payday, not to operate sustainably.

    • NitpickLawyer 4 hours ago
      I agree with the sentiment, but these models aren't suited for that. You can run much bigger models on prem with ~100k of hardware, and those can actually be useful in real-world tasks. These small models are fun to play with, but are nowhere close to solving the needs of a dev shop working in healthcare or banking, sadly.
    • Aurornis 2 hours ago
      I play with the small open weight models and I disagree. They are fun, but they are not in the same class as hosted models running on big hardware.

      If some organization forbade external models they should invest in the hardware to run bigger open models. The small models are a waste of time for serious work when there are more capable models available.

    • kennethops 4 hours ago
      I love the idea of building a competitor to open weight models, but damn is this an expensive game to play
    • smrtinsert 3 hours ago
      How true is this? How does a regulated industry confirm the model itself wasn't trained with malicious intent?
      • ndriscoll 3 hours ago
        Why would it matter if the model is trained with malicious intent? It's a pure function. The harness controls security policies.
  • alecco 2 hours ago
    Related interesting find on Qwen.

    "Qwen's base models live in a very exam-heavy basin - distinct from other base models like llama/gemma. Shown below are the embeddings from randomly sampled rollouts from ambiguous initial words like "The" and "A":"

    https://xcancel.com/N8Programs/status/2044408755790508113

  • armanj 4 hours ago
    I recall a Qwen exec posted a public poll on Twitter, asking which model from Qwen3.6 you want to see open-sourced; and the 27b variant was by far the most popular choice. Not sure why they ignored it lol.
    • zozbot234 3 hours ago
      The 27B model is dense. Releasing a dense model first would be terrible marketing, whereas 35A3B is a lot smarter and more quick-witted by comparison!
      • arxell 3 hours ago
        Each has its pros and cons. Dense models of equivalent total size obviously do run slower if all else is equal; however, 35A3B is absolutely not 'a lot smarter'. In fact, if you set aside the slower inference rates, Qwen3.5 27B is arguably more intelligent and reliable. I use both regularly on a Strix Halo system. Just see the comparison table here: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF . The problem you have to acknowledge if running locally (especially for coding tasks) is that your primary bottleneck quickly becomes prompt processing (NOT token generation), and here the differences between dense and MoE are variable and usually negligible.
        • nunodonato 1 hour ago
          I was hoping this would be the model to replace our Qwen3.5-27B, but the difference is marginal. Too risky, I'll pass and wait for the release of a dense version.
        • Mikealcl 1 hour ago
          Could you explain why prompt processing is the bottleneck, please? I've seen this behavior but I don't understand why.
          • zozbot234 1 hour ago
            You should be able to save a lot on prefill by stashing KV-cache shared prefixes (since KV-cache for plain transformers is an append-only structure) to near-line bulk storage and fetching them in as needed. Not sure why local AI engines don't do this already since it's a natural extension of session save/restore and what's usually called prompt caching.
      • JKCalhoun 1 hour ago
        "…whereas 35A3B is a lot smarter…"

        Must. Parse. Is this a 35 billion parameter model that needs only 3 billion parameters to be active? (Trying to keep up with this stuff.)

        EDIT: A later comment seems to clarify:

        "It's a MoE model and the A3B stands for 3 Billion active parameters…"

      • Miraste 3 hours ago
        What? 35B-A3B is not nearly as smart as 27B.
        • ekianjo 3 hours ago
          yeah the 27B feels like something completely different. If you use it on long context tasks it performs WAY better than 35b-a3b
          • Der_Einzige 2 hours ago
            I've been telling analysts/investors for a long time that dense architectures aren't "worse" than sparse MoEs and to continue to anticipate the see-saw of releases on those two sub-architectures. Glad to continuously be vindicated on this one.

            For those who don't believe me. Go take a look at the logprobs of a MoE model and a dense model and let me know if you can notice anything. Researchers sure did.

        • zkmon 3 hours ago
          Yes.
    • arunkant 3 hours ago
      Probably coming next
    • zkmon 3 hours ago
      I'm guessing 3.5-27b would beat 3.6-35b. MoE is a bad idea here, because for the same VRAM the 27b would leave a lot more room, and the quality of work directly depends on context size, not just the "B" number.
      • zozbot234 3 hours ago
        MoE is not a bad idea for local inference if you have fast storage to offload to, and this is quickly becoming feasible with PCIe 5.0 interconnect.
      • perbu 1 hour ago
      MoE is excellent for unified-memory inference hardware like the DGX Spark, Apple's Mac Studio, etc. Large memory size means you can have quite a few B's, and the smaller experts keep those tokens flowing fast.
  • cyrialize 24 minutes ago
    My last laptop was a used 2012 T530.

    My current is a used M1 MacBook Pro with 16GB of RAM.

    I thought this was all I was ever going to need, but wanting to run really nice models locally has me thinking about upgrading.

    Although, part of me wants to see how far I could get with my trusty laptop.

    • bigyabai 20 minutes ago
      Your current laptop is still a fine thin client. Unless you program in the woods, it's probably cheapest to build a home inference box and route it over Tailscale or something.
  • KronisLV 50 minutes ago
    I wonder how this one compares to Qwen3 Coder Next (the 80B A3B model), since you'd think that even though it's older, it having more parameters would make it more useful for agentic and development use cases: https://huggingface.co/collections/Qwen/qwen3-coder-next
  • seemaze 3 hours ago
    Fingers crossed for mid and larger models as well. I'd personally love to see Qwen3.6-122B-A10B.
  • andy_ppp 1 hour ago
    Do we know if other models have started detecting and poisoning the training/fine-tuning data that these Chinese models seem to use for alignment? I'd certainly be doing some naughty stuff to keep my moat if I were Anthropic or OpenAI…
  • amelius 1 hour ago
    Looks like they compare only to open models, unfortunately.

    As I am using mostly the non-open models, I have no idea what these numbers mean.

  • abhikul0 4 hours ago
    I hope the other sizes are coming too (9B for me). Can't fit much context with this on a 36GB Mac.
    • mhitza 4 hours ago
      It's a MoE model and the A3B stands for 3 Billion active parameters, like the recent Gemma 4.

      You can try to offload the experts on CPU with llama.cpp (--cpu-moe) and that should give you quite the extra context space, at a lower token generation speed.

      • abhikul0 3 hours ago
        Mac has unified memory, so 36GB is 36GB for everything - GPU and CPU.
        • zozbot234 3 hours ago
          CPU-MoE still helps with mmap. Should not overly hurt token-gen speed on the Mac since the CPU has access to most (though not all) of the unified memory bandwidth, which is the bottleneck.
          • abhikul0 3 hours ago
            I'll try to use that, but llama-server has mmap on by default and the model still takes up its full size in RAM; not sure what's going on.
            • zozbot234 3 hours ago
              Try running CPU-only inference to troubleshoot that. GPU layers will likely just ignore mmap.
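
              Something like this is a quick way to check (a sketch; flags as in current llama.cpp, file name as in the Unsloth repo above):

                # force CPU-only layers so mmap behaviour is actually observable
                llama-server -m Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf -ngl 0
                # compare against loading the whole model into RAM up front
                llama-server -m Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf -ngl 0 --no-mmap

              With mmap, resident memory should only grow as pages get touched; if both runs look the same, something else is pinning the weights.
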
        • mhitza 3 hours ago
          For sure I was running on autopilot with that reply. Though in Q4 I would expect it to fit, as 24B-A4B Gemma model without CPU offloading got up to 18GB of VRAM usage
      • dgb23 4 hours ago
        Should I expect the same memory footprint from a model with N active parameters as from one with simply N total parameters?
        • daemonologist 3 hours ago
          No - this model has the weights memory footprint of a 35B model (you do save a little bit on the KV cache, which will be smaller than the total size suggests). The lower number of active parameters gives you faster inference, including lower memory bandwidth utilization, which makes it viable to offload the weights for the experts onto slower memory. On a Mac, with unified memory, this doesn't really help you. (Unless you want to offload to nonvolatile storage, but it would still be painfully slow.)

          All that said you could probably squeeze it onto a 36GB Mac. A lot of people run this size model on 24GB GPUs, at 4-5 bits per weight quantization and maybe with reduced context size.

      • pdyc 4 hours ago
        I don't get it - Macs have unified memory, how would offloading experts to CPU help?
        • bee_rider 3 hours ago
          I bet the poster just didn’t remember that important detail about Macs; it is kind of unusual from a normal computer point of view.

          I wonder though, do Macs have swap? Could unused experts be offloaded to swap?

          • abhikul0 3 hours ago
            Of course the swap is there for fallback but I hate using it lol as I don't want to degrade SSD longevity.
    • pdyc 4 hours ago
      Can you elaborate? You can use a quantized version - would context still be an issue with it?
      • abhikul0 3 hours ago
        A usable quant, Q5_KM imo, takes up ~26GB[0], which leaves around ~6-7GB for context and running other programs which is not much.

        [0] https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF?show_fil...

      • nickthegreek 4 hours ago
        context is always an issue with local models and consumer hardware.
        • pdyc 3 hours ago
          Correct, but it should be some ratio of model size: if the model is x GB, max context would occupy roughly x times some constant of RAM. For the quantized version, assuming it's 18GB at Q4, it should be able to support 64-128k context on this Mac.
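
          The constant basically falls out of the KV-cache size. Rough worked example (the layer/head numbers below are made-up placeholders, not Qwen3.6's actual config):

            KV cache ≈ 2 (K and V) × n_layers × n_kv_heads × head_dim × bytes/elem × ctx_len
            e.g. 48 layers × 4 KV heads × 128 head_dim, fp16 cache, 64k context:
                 2 × 48 × 4 × 128 × 2 B × 65536 ≈ 6.0 GiB   (roughly half that with a q8_0 KV cache)
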
          • abhikul0 3 hours ago
            For the 9B model, I can use the full context with Q8_0 KV. This uses around ~16GB, while still leaving a comfortable headroom.

            Output after I exit the llama-server command:

              llama_memory_breakdown_print: | memory breakdown [MiB]  | total    free     self   model   context   compute    unaccounted |
              llama_memory_breakdown_print: |   - MTL0 (Apple M3 Pro) | 28753 = 14607 + (14145 =  6262 +    4553 +    3329) +           0 |
              llama_memory_breakdown_print: |   - Host                |                   2779 =   666 +       0 +    2112                |
  • jake-coworker 3 hours ago
    This is surprisingly close to Haiku quality, but open - and Haiku is quite a capable model (many of the Claude Code subagents use it).
    • wild_egg 3 hours ago
      Where did you see a Haiku comparison? Haiku 4.5 was my daily driver for a month or so before Opus 4.5 dropped, and I would be unreasonably happy if a local model could give me similar capability.
      • deaux 8 minutes ago
        I find Gemma 4 26B A4B better than Haiku 4.5 and that's smaller than this one.
      • daemonologist 2 hours ago
        I didn't see a direct comparison, but there's some overlap in the published benchmarks:

                                   │ Qwen 3.6 35B-A3B │ Haiku 4.5               
           ────────────────────────┼──────────────────┼──────────────────────── 
            SWE-Bench Verified     │ 73.4             │ 66.6                    
           ────────────────────────┼──────────────────┼──────────────────────── 
            SWE-Bench Multilingual │ 67.2             │ 64.7                    
           ────────────────────────┼──────────────────┼──────────────────────── 
            SWE-Bench Pro          │ 49.5             │ 39.45                   
           ────────────────────────┼──────────────────┼──────────────────────── 
            Terminal Bench 2.0     │ 51.5             │ 61.2 (Warp), 27.5 (CC)  
           ────────────────────────┼──────────────────┼──────────────────────── 
            LiveCodeBench          │ 80.4             │ 41.92                   
        
        
        These are of course all public benchmarks though - I'd expect there to be some memorization/overfitting happening. The proprietary models usually have a bit of an advantage in real-world tasks in my experience.
      • coder543 2 hours ago
        Artificial Analysis hasn't posted their independent analysis of Qwen3.6 35B A3B yet, but Alibaba's benchmarks paint it as being on par with Qwen3.5 27B (or better in some cases).

        Even Qwen3.5 35B A3B benchmarks roughly on par with Haiku 4.5, so Qwen3.6 should be a noticeable step up.

        https://artificialanalysis.ai/models?models=gpt-oss-120b%2Cg...

        No, these benchmarks are not perfect, but short of trying it yourself, this is the best we've got.

        Compared to the frontier coding models like Opus 4.7 and GPT 5.4, Qwen3.6 35B A3B is not going to feel smart at all, but for something that can run quickly at home... it is impressive how far this stuff has come.

  • fooblaster 4 hours ago
    Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO pumping hype. No subscription fees. I am so pumped to try this!
    • wrxd 2 hours ago
      Same here. I really hope that in the near future local models will be good enough, and hardware fast enough to run them, for this to become viable for most use cases.
  • tmaly 33 minutes ago
    What is the minimum VRAM this can run on, given it is MoE?
  • solomatov 45 minutes ago
    Did anyone try it and Gemma 4? Does it feel that it's better than Gemma 4?
  • rvnx 3 hours ago
    China won again in terms of openness
  • syntaxing 1 hour ago
    Is it worth running speculative decoding on small active models like this? Or does MTP make speculative decoding unnecessary?
  • aliljet 3 hours ago
    I'm broadly curious how people are using these local models. Literally, how are they attaching harnesses to this and finding more value than just renting tokens from Anthropic or OpenAI?
    • jwitthuhn 10 minutes ago
      I've been largely using Qwen3.5-122b at 6 bit quant locally for some c++/go/python dev lately because it is quite capable as long as I can give it pretty specific asks within the codebase and it will produce code that needs minimal massaging to fit into the project.

      I do have a $20 claude sub I can fall back to for anything qwen struggles with, but with 3.5 I have been very pleased with the results.

    • seemaze 3 hours ago
      Qwen3.5-9B has been extremely useful for local fuzzy table extraction OCR for data that cannot be sent to the cloud.

      The documents have subtly different formatting and layout due to source variance. Previously we used a large set of hierarchical heuristics to catch as many edge cases as we could anticipate.

      Now with the multi-modal capabilities of these models we can leverage the language capabilities alongside vision to extract structured data from a table that has 'roughly this shape' and 'this location'.

    • marssaxman 3 hours ago
      I used vLLM and qwen3-coder-next to batch-process a couple million documents recently. No token quota, no rate limits, just 100% GPU utilization until the job was done.
    • kamranjon 2 hours ago
      I use LMStudio to host and run GLM 4.7 Flash as a coding agent. I use it with the Pi coding agent, but also use it with the Zed editor agent integrations. I've used the Qwen models in the past, but have consistently come back to GLM 4.7 because of its capabilities. I often use Qwen or Gemma models for their vision capabilities. For example, I often will finish ML training runs, take a photo of the graphs and visualizations of the run metrics and ask the model to tell me things I might look at tweaking to improve subsequent training runs. Qwen 3.5 0.8b is pretty awesome for really small and quick vision tasks like "Give me a JSON representation of the cards on this page".
    • Aurornis 2 hours ago
      It’s easy to find a combination of llama.cpp and a coding tool like OpenCode for these. Asking an LLM for help setting it up can work well if you don’t want to find a guide yourself.

      > and finding more value than just renting tokens from Anthropic of OpenAI?

      Buying hardware to run these models is not cost effective. I do it for fun for small tasks but I have no illusions that I’m getting anything superior to hosted models. They can be useful for small tasks like codebase exploration or writing simple single use tools when you don’t want to consume more of your 5-hour token budget though.

      • toxik 25 minutes ago
        Oh lord, are the LLMs already replacing LLMs?
    • deaux 2 hours ago
      While they can be run locally, and most of the discussion on HN is about that, I bet that if you look at total tok/day, local usage is a tiny amount compared to total cloud inference even for these models. Most people who do use them locally just do a prompt every now and then.
      • zozbot234 2 hours ago
        This is why I'd like to see a lot more focus on batched inference with lower-end hardware. If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or so, you don't really need top-of-the-line hardware even for SOTA results.
        • deaux 17 minutes ago
          > If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or so

          But they can't? The usage pattern is the polar opposite. Most people running these models locally just ask a few questions to it throughout the day. They want the answers now, or at least within a minute.

    • oompydoompy74 3 hours ago
      Idk about everyone else, but I don’t want to rent tokens forever. I want a self hosted model that is completely private and can’t be monitored or adulterated without me knowing. I use both currently, but I am excited at the prospect of maybe not having to in the near to mid future.

      I’ve increasingly started self hosting everything in my home lately because I got tired of SAAS rug pulls and I don’t see why LLM’s should eventually be any different.

    • znnajdla 2 hours ago
      Some tasks don’t require SOTA models. For translating small texts I use Gemma 4 on my iPhone because it’s faster and better than Apple Translate or Google Translate and works offline. Also if you can break down certain tasks like JSON healing into small focused coding tasks then local models are useful
      • kaliqt 1 hour ago
        Is it really better? In which languages?
        • deaux 5 minutes ago
          Yes it is and has been for a very long time, it has been years now. Gemini 1.5 Pro is when LLM translations started significantly outperforming non-LLM machine translation, and that came out over 2 years ago.

          Ever since then Google models have been the strongest at translation across the board, so it's no surprise Gemma 4 does well. Gemini 3 Flash is better at translation than any Claude or GPT model. OpenAI models have always been weakest at it, continuing to this day. It's quite interesting how these characteristics have stayed stable over time and many model versions.

          I'm primarily talking about non-trivial language pairs, something like English<>Spanish is so "easy" now it's hard to distinguish the strong models.

    • lkjdsklf 3 hours ago
      The people i know that use local models just end up with both.

      The local models don’t really compete with the flagship labs for most tasks

      But there are things you may not want to send to them for privacy reasons or tasks where you don’t want to use tokens from your plan with whichever lab. Things like openclaw use a ton of tokens and most of the time the local models are totally fine for it (assuming you find it useful which is a whole different discussion)

    • bildung 3 hours ago
      The privacy/data security angle really is important in some regions and industries. Think European privacy laws or customers demanding NDAs. The value of Anthropic and OpenAI is zero for both cases, so easy to beat, despite local models being dumber and slower.
    • Panda4 3 hours ago
      I was thinking the same thing. My only guess is that they are excited about local models because they can run them cheaper through OpenRouter?
    • flux3125 3 hours ago
      They are okay for vibe coding throw-away projects without spending your Anthropic/OAI tokens
    • kylehotchkiss 1 hour ago
      I am working on a research project to link churches from their IRS Exempt Org BMF entry to their Google search result out of the 10 fetched. Qwen2.5-14b on a 16GB Mac Mini. It works well enough!

      It's entertaining to see HN increasingly consider coding harnesses the only value a model can provide.

    • dist-epoch 1 hour ago
      There are really nice GUIs for LLMs - CherryStudio for example, can be used with local or cloud models.

      There are also web-UIs - just like the labs ones.

      And you can connect coding agents like Codex, Copilot or Pi to locally served models - they support OpenAI-compatible APIs.

      It's literally a terminal command to start serving the model locally and you can connect various things to it, like Codex.

  • dataflow 3 hours ago
    I'm a newbie here and lost how I'm supposed to use these models for coding. When I use them with Continue in VSCode and start typing basic C:

      #include <stdio.h>
      int m
    
    I get nonsensical autocompletions like:

      #include <stdio.h>
      int m</fim_prefix>
    
    What is going on?
    • sosodev 3 hours ago
      These are not autocomplete models. It’s built to be used with an agentic coding harness like Pi or OpenCode.
      • zackangelo 3 hours ago
        They are but the IDE needs to be integrated with them.

        Qwen specifically calls out FIM (“fill in the middle”) support on the model card and you can see it getting confused and posting the control tokens in the example here.
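
        If you want to sanity-check FIM outside the editor, llama.cpp's server exposes an infill endpoint - roughly like this (a sketch; it assumes the GGUF metadata carries the model's FIM tokens and that llama-server is running on its default port):

          curl http://localhost:8080/infill -d '{
            "input_prefix": "#include <stdio.h>\nint m",
            "input_suffix": "\n",
            "n_predict": 32
          }'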

        • sosodev 3 hours ago
          Oh, that’s interesting. Thanks for the correction. I didn’t know such heavily post trained models could still do good ol fashion autocomplete.
      • JokerDan 1 hour ago
        And even for those models trained for tool calling and agentic flows, mileage may vary depending on lots of factors. Been playing around with smaller local models (anything that fits on a 4090 + 64GB RAM) and it seems to be a lottery: a) whether it works at all and b) how long it will work for.

        Sometimes they don't manage any tool calls and fall over right off the bat, other times they manage a few tool calls and then start spewing nonsense. Some can manage sub-agents for a while and then fall apart. I just can't seem to get any consistently decent output on more 'consumer/home PC' type hardware. Mostly been using either Pi or OpenCode for this testing.

    • woctordho 3 hours ago
      Choose the correct FIM (Fill In the Middle) template for Qwen in Continue. All recent Qwen models are actually trained with FIM capability and you can use them.
    • recov 2 hours ago
      I would use something like zeta-2 instead - https://huggingface.co/bartowski/zed-industries_zeta-2-GGUF
    • Jeff_Brown 3 hours ago
      This might sound snarky but in all earnestness, try talking to an AI about your experience using it.
  • 999900000999 1 hour ago
    Looking to move off ollama on Open Suse tumbleweed.

    Should I use brew to install llama.cpp, or zypper to install the Tumbleweed package?

    • rexreed 58 minutes ago
      Why are you looking to move off Ollama? Just curious because I'm using Ollama and the cloud models (Kimi 2.5 and Minimax 2.7) which I'm having lots of good success with.
      • 999900000999 25 minutes ago
        Ollama commingles online and local models, which defeats the purpose for me
  • kombine 3 hours ago
    What kind of hardware (preferably non-Apple) can run this model? What about 122B?
    • daemonologist 3 hours ago
      The 3B active is small enough that it's decently fast even with experts offloaded to system memory. Any PC with a modern (>=8 GB) GPU and sufficient system memory (at least ~24 GB) will be able to run it okay; I'm pretty happy with just a 7800 XT and DDR4. If you want faster inference you could probably squeeze it into a 24 GB GPU (3090/4090 or 7900 XTX) but 32 GB would be a lot more comfortable (5090 or Radeon Pro).

      122B is a more difficult proposition. (Also, keep in mind the 3.6 122B hasn't been released yet and might never be.) With 10B active parameters offloading will be slower - you'd probably want at least 4 channels of DDR5, or 3x 32GB GPUs, or a very expensive Nvidia Pro 6000 Blackwell.

    • ru552 3 hours ago
      You won't like it, but the answer is Apple. The reason is the unified memory: the GPU can access all 32GB, 64GB, 128GB, 256GB, etc. of RAM.

      An easy way (napkin math) to know if you can run a model based on its parameter size is to treat the parameter count as the number of GB that needs to fit in GPU RAM: a 35B model needs at least 35GB of GPU RAM. This is a very simplified way of looking at it and YES, someone is going to say you can offload to CPU, but no one wants to wait 5 seconds for 1 token.

      • samtheprogram 3 hours ago
        That estimate doesn't account for context, which is very important for tool use and coding.

        I used this napkin math for image generation, since the context (prompts) were so small, but I think it's misleading at best for most uses.
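
        A rough sketch of both terms, where the layer and head counts are made-up placeholders rather than the real Qwen3.6 architecture:

          # Napkin math for local LLM memory: weights + KV cache.
          # All architecture numbers here are hypothetical placeholders.
          def weight_gb(total_params_b, bits_per_weight):
              return total_params_b * bits_per_weight / 8   # GB of weights

          def kv_cache_gb(ctx, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
              # 2x for keys and values, fp16 cache elements by default
              return 2 * ctx * n_layers * n_kv_heads * head_dim * bytes_per_elem / 1e9

          print(weight_gb(35, 4.5))                # ~19.7 GB at a ~Q4 quant
          print(kv_cache_gb(32_768, 48, 4, 128))   # ~3.2 GB at 32k context
          print(kv_cache_gb(262_144, 48, 4, 128))  # ~25.8 GB at the full 256k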

      • sliken 2 hours ago
        > You won't like it, but the answer is Apple.

        Or strix halo.

        Seems rather oversimplified.

        Depending on the quant level, Qwen3.6 ranges from about 10GB to 38.5GB.

        Qwen supports a context length of 262,144 natively, but it can be extended to 1,010,000, and of course the context can always be shortened.

        Just use one of the calculators and you'll get a much more useful number.

    • terramex 3 hours ago
      I run Gemma 4 26B-A4B with 256k context (the maximum) on a Radeon 9070XT with 16GB VRAM + 64GB RAM and partial GPU offload (using the recommended LM Studio settings) at a very reasonable 35 tokens per second. This model is similar in size, so I expect similar performance.
    • rhdunn 3 hours ago
      The Q5 quantization (26.6GB) should easily run on a 32GB 5090. The Q4 (22.4GB) should fit on a 24GB 4090, but you may need to drop it down to Q3 (16.8GB) when factoring in the context.

      You can also run those on smaller cards by configuring the number of layers on the GPU. That should allow you to run the Q4/Q5 version on a 4090, or on older cards.

      You could also run it entirely on the CPU/in RAM if you have 32GB (or ideally 64GB) of RAM.

      The more you run in RAM the slower the inference.
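
      If you use the llama-cpp-python bindings rather than a CLI, the GPU/CPU split is just the n_gpu_layers knob. A minimal sketch - the file name, layer count, and context size are placeholders to tune for your card:

        # Minimal sketch of partial GPU offload with llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="Qwen3.6-35B-A3B-Q4_K_M.gguf",  # hypothetical file name
            n_gpu_layers=28,   # layers kept on the GPU; -1 offloads everything
            n_ctx=32768,       # the KV cache grows with this
        )

        out = llm("Write a C function that reverses a string.", max_tokens=256)
        print(out["choices"][0]["text"])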

    • canpan 3 hours ago
      Any good gaming PC can run the 35B-A3B model: llama.cpp with RAM offloading. A high-end gaming PC can run it at higher speeds. For the 122B, you need a lot of memory, which is expensive now, and it will be much slower since you have to use mostly system RAM.
      • bigyabai 2 hours ago
        Seconding this. You can get A3B/A4B models to run with 10+ tok/sec on a modern 6/8GB GPU with 32k context if you optimize things well. The cheapest way to run this model at larger contexts is probably a 12gb RTX 3060.
    • mildred593 3 hours ago
      I can run this on an AMD Framework laptop: a Ryzen 7 (I don't have Ryzen AI, just a Ryzen 7 7840U) with 32+48 GB DDR. The Ryzen unified memory is enough; I get at least 26GB of VRAM.

      Fedora 43 and LM Studio with Vulkan llama.cpp

    • bildung 3 hours ago
      I currently run Qwen3.5-122B (Q4) on a Strix Halo (Bosgame M5) and am pretty happy with it. Obviously much slower than hosted models. I get ~20 t/s with empty context and am down to about 14 t/s with 100k of context filled.

      No tuning at all, just apt install rocm and rebuilding llama.cpp every week or so.

  • ghc 4 hours ago
    How does this compare to gpt-oss-120b? It seems weird to leave it out.
    • 7734128 51 minutes ago
      OSS-120 is too old to be relevant, and four times the size.
    • vyr 3 hours ago
      GPT-OSS 120B (really 117B-A5.1B) is a lot bigger. A better comparison would be to the 20B (21B-A3.6B).
  • Glemllksdf 2 hours ago
    I tried Gemma 4 A4B and was surprised how hard it is to use for agentic stuff on an RTX 4090 with 24GB of VRAM.

    Balancing the KV cache and context is tricky; they eat VRAM super fast.

  • zoobab 4 hours ago
    "open source"

    give me the training data?

    • tjwebbnorfolk 3 hours ago
      The training data is the entire internet. How do you propose they ship that to you?
      • thrance 1 hour ago
        As a zip archive of however they store it in their database?
    • flux3125 3 hours ago
      You ARE the training data
  • lopsotronic 2 hours ago
    Dangit, I'll need to give this a run on my personal machine. This looks impressive.

    At the time of writing, all DeepSeek or Qwen models are de facto prohibited in govcon, including local machine deployments via Ollama or similar. Although no legislative or executive mandate yet exists [1], it's perceived as a gap [2], and contracts are already including language prohibiting them not just in the product but in any part of the software environment.

    The attack surface for a (non-agentic) model running in local ollama is basically non-existent . . but, eh . . I do get it, at some level. While they're not l33t haXX0ring your base, the models are still largely black boxes, can move your attention away from things, or towards things, with no one being the wiser. "Landing Craft? I see no landing craft". This would boil out in test, ideally, but hey, now you know how much time your typical defense subcon spends in meaningful software testing[3].

    [1] See also OMB Memorandum M-25-22 (preference for AI developed and produced in the United States), NIST CAISI assessment of PRC-origin AI models as "adversary AI" (September 2025), and House Select Committee on the CCP Report (April 16, 2025), "DeepSeek Unmasked".

    [2] Overall, rather than blacklist, I'd recommend a "whitelist" of permitted models, maintained dynamically. This would operate the same way you would manage libraries via SSCG/SSCM (software supply chain governance/management) . . but few if any defense subcons have enough onboard savvy to manage SSCG let alone spooling a parallel construct for models :(. Soooo . . ollama regex scrubbing it is.

    [3] i.e. none at all; we barely have the ability to MAKE anything like software, given the combination of underwhelming pay scales and the fact that defense companies always seem to have a requirement for 100% on-site in some random crappy town in the middle of BFE. If it wasn't for the downturn in tech we wouldn't have anyone useful at all, but we snagged some silicon refugees.

  • incomingpain 4 hours ago
    Wowzers, we were worried Qwen was going to suffer having lost several high-profile people on the team, but that's a huge drop.

    Is it better than 27B?

    • adrian_b 4 hours ago
      Their previous model Qwen3.5 was available in many sizes, from very small sizes intended for smartphones, to medium sizes like 27B and big sizes like 122B and 397B.

      This model is the first that is provided with open weights from their newer family of models Qwen3.6.

      Judging from its medium size, Qwen/Qwen3.6-35B-A3B is intended as a superior replacement of Qwen/Qwen3.5-27B.

      It remains to be seen whether they will also publish in the future replacements for the bigger 122B and 397B models.

      The older Qwen3.5 models can also be found in uncensored modifications. It also remains to be seen whether it will be easy to uncensor Qwen3.6, because for some recent models, like Kimi-K2.5, the methods used to remove censoring from older LLMs no longer worked.

      • mft_ 3 hours ago
        There was also Qwen3.5-35B-A3B in the previous generation: https://huggingface.co/Qwen/Qwen3.5-35B-A3B
      • storus 1 hour ago
        > Qwen/Qwen3.6-35B-A3B is intended as a superior replacement of Qwen/Qwen3.5-27B

        Not at all, Qwen3.5-27B was much better than Qwen3.5-35B-A3B (dense vs MoE).

        • mudkipdev 1 hour ago
          Re-read that
          • storus 1 hour ago
            You should. The 3.5 MoE was worse than the 3.5 dense model, so expecting the 3.6 MoE to be superior to the 3.5 dense is questionable; one could argue that a 3.6 dense model (not yet released) would be superior to the 3.5 dense.
  • btbr403 3 hours ago
    Planning to deploy Qwen3.6-35B-A3B on NVIDIA Spark DGX for multi-agent coding workflows. The 3B active params should help with concurrent agent density.
  • psim1 1 hour ago
    (Please don't downvote - serious question) Are Chinese models generally accepted for use within US companies? The company I work for won't allow Qwen.
    • DiabloD3 50 minutes ago
      There is a difference between Chinese model and Chinese service.

      Your company is most likely banning the use of foreign services, but it wouldn't make sense to ban the model, since the model would be run locally.

      I wouldn't allow my employees to use a foreign service either if my company had specific geographic laws it had to follow (i.e. financial, medical, or privacy laws, such as the ones in the EU).

      That said, I'm not sure I'd allow them to use any AI product either, locally inferred on-prem or not: I need my employees to _not_ make mistakes, not automate mistake making.

    • kelsey98765431 1 hour ago
      In the private sector, yes. Anything that touches the public sector (government) starts to raise supply-chain concerns, and they want all-American-made models.
  • zshn25 3 hours ago
    What do all the numbers 6-35B-A3B mean?
    • dunb 3 hours ago
      3.6 is the release version for Qwen. This model is a mixture of experts (MoE), so while the total model size is big (35 billion parameters), each forward pass only activates a portion of the network that’s most relevant to your request (3 billion active parameters). This makes the model run faster, especially if you don’t have enough VRAM for the whole thing.

      The performance/intelligence is said to be about the same as the geometric mean of the total and active parameter counts. So, this model should be equivalent to a dense model with about 10.25 billion parameters.

      • wongarsu 3 hours ago
        And even if you have enough VRAM to fit the entire thing, the time per generated token is roughly proportional to (activated parameters)/(VRAM bandwidth), since every token has to read the active weights once.

        If you have the VRAM to spare, a model with more total params but fewer activated ones can be a very worthwhile tradeoff. Of course, that's a big if.
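
        Rough numbers to make the bandwidth intuition concrete (the bandwidth figure is a placeholder, not a measurement, and this ignores compute and overhead):

          # Back-of-the-envelope decode ceiling for a memory-bound MoE model.
          active_params = 3e9        # A3B: ~3B parameters read per token
          bytes_per_param = 0.56     # ~4.5 bits/weight at a Q4-ish quant
          bandwidth = 1.0e12         # placeholder: ~1 TB/s memory bandwidth

          bytes_per_token = active_params * bytes_per_param
          print(bandwidth / bytes_per_token)   # ~600 tokens/s theoretical ceiling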

      • zshn25 3 hours ago
        Sorry, how did you calculate the 10.25B?
        • darrenf 3 hours ago
          > > The performance/intelligence is said to be about the same as the geometric mean of the total and active parameter counts. So, this model should be equivalent to a dense model with about 10.25 billion parameters.

          > Sorry, how did you calculate the 10.25B?

          The geometric mean of two numbers is the square root of their product. Square root of 105 (35*3) is ~10.25.
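
          In code, for anyone who wants to plug in other sizes (it's only a folk heuristic, not something from the model card):

            import math

            total, active = 35e9, 3e9
            dense_equiv = math.sqrt(total * active)   # geometric mean heuristic
            print(f"{dense_equiv / 1e9:.2f}B")        # ~10.25B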

    • cshimmin 3 hours ago
      The 6 is part of 3.6, the model version. 35B parameters, A3B means it's a mixture of experts model with only 3B parameters active in any forward pass.
      • zshn25 3 hours ago
        Got it. Thanks
    • joaogui1 3 hours ago
      3.6 is the model version, 35B is the total number of parameters, and A3B means that only 3B parameters are activated per token, which has some implications for serving (either in how you shard the model, or in that you can keep all the params in RAM and only load into VRAM what you need to compute the current token, which is slower, but at least it runs).
    • JLO64 3 hours ago
      35B (35 billion) is the number of parameters this model has. It's a Mixture of Experts (MoE) model, so A3B means that 3B parameters are Active at any moment.
      • zshn25 3 hours ago
        ~I see. What’s the 6?~

        Nevermind, the other reply clears it

  • nurettin 3 hours ago
    I tried the car wash puzzle:

    You want to wash your car. Car wash is 50m away. Should you walk or go by car?

    > Walk. At 50 meters, the round trip is roughly 100 meters, taking about two minutes on foot. Driving would require starting the engine, navigating, parking, and dealing with unnecessary wear for a negligible distance. Walk to the car wash, and if the bay requires the vehicle inside, have it moved there or return on foot. Walking is faster and more efficient.

    Classic response. It was really hard to one-shot this with Qwen3.5 Q4_K_M.

    Qwen3.6 UD-IQ4_XS also failed the first time, then I added this to the system prompt:

    > Double check your logic for errors

    Then I created a new dialog and asked the puzzle and it responded:

    > Drive it. The car needs to be present to be washed. 50 meters is roughly a 1-minute walk or a 10-second drive. Walking leaves the car behind, making the wash impossible. Driving it the short distance is the only option that achieves the goal.

    Now 3.6 gets it right every time. So not as great as a super model, but definitely an improvement.

    • dist-epoch 1 hour ago
      Interestingly, Gemma4-26B IQ4_XS gets it correct:

      > This sounds like a logic riddle! The answer is: You should go by car. Here is why: If you walk, you will arrive at the car wash, but your car will still be 50 meters away at home. You can't wash the car if the car isn't there! To accomplish your goal, you have to drive the car to the car wash.

      It had the wrong answer in its thinking, and it thought for longer than usual:

      Direct answer: Walk.

      Reasoning 1: Distance (50m is negligible).

      Reasoning 2: Practicality/Efficiency (engine wear/fuel).

      Reasoning 3: Time (walking is likely faster or equal when considering car prep).

      ...

      Wait, if I'm washing the car, I need to get the car to the car wash. The question asks how I should get there.

      ...

      Wait, let's think if there's a trick. If you "go by car," you are moving the car to the destination. If you "walk," you are just moving yourself.

      Conclusion: You should drive the car.

  • fred_is_fred 4 hours ago
    How does this compare to the commercial models like Sonnet 4.5 or GPT? Close enough that the price is right (free)?
    • vidarh 4 hours ago
      They will not measure up. Notice they're comparing it to Gemma, Google's open-weight model, not to Gemini, Sonnet, or GPT. That's fine - this is a tiny model.

      If you want something closer to the frontier models, Qwen3.6-Plus (not open) is doing quite well[1] (I've not tested it extensively personally):

      https://qwen.ai/blog?id=qwen3.6

    • NitpickLawyer 4 hours ago
      > Close enough

      No. These are nowhere near SotA, no matter what benchmark number goes up. They are amazing for what they are (runnable on regular PCs), and you can find use cases for them (where privacy >> speed / accuracy) where they perform "good enough", but they are not magic. They have limitations, and you need to adapt your workflows to handle them.

      • julianlam 4 hours ago
        Can you share more about what adaptations you made when using smaller models?

        I'm just starting my exploration of these small models for coding on my 16GB machine (yeah, puny...) and am running into issues where the solution may very well be to reduce the scope of the problem set so the smaller model can handle it.

        • ukuina 3 hours ago
          You'd do most of the planning/cognition yourself, down to the module/method signature level, and then have it loop through the plan to "fill in the code". Need a strong testing harness to loop effectively.
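
          Something like this minimal loop, where query_model is a stand-in for whatever local endpoint you run; the function, prompts, and file names are purely illustrative, not a real harness:

            # Illustrative plan -> generate -> test loop for a small local model.
            import subprocess

            def query_model(prompt: str) -> str:
                """Stand-in for your local completion endpoint."""
                raise NotImplementedError

            # You do the planning and write the signature; the model fills in the body.
            signature = "def parse_args(argv: list[str]) -> dict:"
            prompt = f"Implement exactly this Python function, nothing else:\n{signature}"

            for attempt in range(3):
                code = query_model(prompt)
                with open("generated.py", "w") as f:
                    f.write(code)
                tests = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
                if tests.returncode == 0:
                    break
                # Tight feedback: hand the failing output straight back to the model.
                prompt = f"{code}\n\nThe tests failed:\n{tests.stdout}\n\nRewrite the function."
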
        • adrian_b 3 hours ago
          General claims about a model are very unlikely to be useful; only very specific claims are, ones that indicate the exact number of parameters and the quantization method used by each of the compared models.

          If you perform the inference locally, there is a huge space of compromise between the inference speed and the quality of the results.

          Most open weights models are available in a variety of sizes. Thus you can choose anywhere from very small models with a little more than 1B parameters to very big models with over 750B parameters.

          For a given model, you can choose to evaluate it in its native number size, which is normally BF16, or in a great variety of smaller quantized number sizes, in order to fit the model in less memory or just to reduce the time for accessing the memory.

          Therefore, if you choose big models without quantization, you may obtain results very close to SOTA proprietary models.

          If you choose models so small and so quantized as to run in the memory of a consumer GPU, then it is normal to get results much worse than with a SOTA model that is run on datacenter hardware.

          Choosing to run models that do not fit inside the GPU memory reduces the inference speed a lot, and choosing models that do not fit even inside the CPU memory reduces the inference speed even more.

          Nevertheless, slow inference that produces better results may reduce the overall time for completing a project, so one should do a lot of experiments to determine an appropriate compromise.

          When you use your own hardware, you do not have to worry about token cost or subscription limits, which may change the optimal strategy for using a coding assistant. Moreover, it is likely that in many cases it may be worthwhile to use multiple open-weights models for the same task, in order to choose the best solution.

          For example, when comparing older open-weights models with Mythos: with appropriate prompts, all the bugs that could be found by Mythos could also be found by the old models. The difference was that Mythos found all the bugs alone, while with the free models you had to run several of them in order to find all the bugs, because the models had different strengths and weaknesses.

          (In other HN threads there have been some bogus claims that Mythos was somehow much smarter, but that does not appear to be true. The other company has provided the precise prompts used for finding the bugs, and it would not have been too difficult to generate them automatically with a harness. Anthropic has also admitted that the bugs found by Mythos had not been found by using a prompt like "find the bugs", but by running Mythos many times on each file with increasingly specific prompts, until the final run requested only a confirmation of the bug, not a search for it. So the difference between SOTA models like Mythos and the open-weights models is real, but it is far smaller than Anthropic claims.)

          • aesthesia 2 hours ago
            > Anthropic has also admitted that the bugs found by Mythos had not been found by using a prompt like "find the bugs", but by running many times Mythos on each file with increasingly more specific prompts, until the final run that requested only a confirmation of the bug, not searching for it.

            Unless there's been more information since their original post (https://red.anthropic.com/2026/mythos-preview/), this is a misleading description of the scaffold. The process was:

            - provide a container with running software and its source code

            - prompt Mythos to prioritize source files based on the likelihood they contain vulnerabilities

            - use this prioritization to prompt parallel agents to look for and verify vulnerabilities, focusing on but not limited to a single seed file

            - as a final validation step, have another instance evaluate the validity and interestingness of the resulting bug reports

            This amounts to at most three invocations of the model for each file, once for prioritization, once for the main vulnerability run, and once for the final check. The prompts only became more specific as a result of information the model itself produced, not any external process injecting additional information.

    • yaur 4 hours ago
      I think it's worth noting that if you are paying for electricity, a local LLM is NOT free. In most cases you will find that Haiku is cheaper, faster, and better than anything that will run on your local machine.
      • gyrovagueGeist 3 hours ago
        Electricity (in the continental US) is pretty cheap, assuming you already have the hardware:

        Running a 1000W rig at full load every second of the year, at 16 cents per kWh, costs about $1,400 USD, and at 100 tps that's roughly 3.15 billion tokens.

        The same amount of tokens would cost at least $3,150 USD on current Claude Haiku 3.5 pricing.
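
        The arithmetic, for anyone who wants to plug in their own wattage and rates:

          # Yearly cost of a 1 kW rig running flat out, vs. tokens produced.
          watts = 1000
          hours_per_year = 24 * 365             # 8,760 hours
          price_per_kwh = 0.16                  # USD

          kwh = watts / 1000 * hours_per_year   # 8,760 kWh
          electricity = kwh * price_per_kwh     # ~$1,400

          tokens = 100 * 3600 * hours_per_year  # 100 tok/s all year ~= 3.15B tokens
          print(electricity / (tokens / 1e6))   # ~$0.44 per million tokens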

        • ac29 3 hours ago
          This 35B-A3B model is 4-5x cheaper than Haiku though, suggesting it would still be cheaper to outsource inference to the cloud vs running locally in your example
      • postalrat 3 hours ago
        If you need the heating then it is basically free.
        • mrob 3 hours ago
          Only if you use resistive electric heating, which is usually the most expensive heating available.
  • yieldcrv 2 hours ago
    Anybody use these instead of codex or claude code? Thoughts in comparison?

    Benchmarks don't really help me much.

  • tristor 3 hours ago
    I'm disappointed they didn't release a 27B dense model. I've been working with Qwen3.5-27B and Qwen3.5-35B-A3B locally, both in their native weights and the versions the community distilled from Opus 4.6 (Qwopus), and I have found I generally get higher quality outputs from the 27B dense model than from the 35B-A3B MoE model. My basic conclusion was that the MoE approach may be more memory efficient, but it requires a fairly large set of active parameters to match similarly sized dense models; I got better or comparable results from Qwen3.5-122B-A10B than from Qwen3.5-27B, though at a slower generation speed. I am certain that for frontier providers with massive compute, MoE represents a meaningful efficiency gain with similar quality, but for running models locally I still prefer medium-sized dense models.

    I'll give this a try, but I would be surprised if it outperforms Qwen3.5-27B.

    • adrian_b 3 hours ago
      You are right, but this is just the first open-weights model of this family.

      They said that they will release several open-weights models, though there was an implication that they might not release the biggest models.

      • hnfong 3 hours ago
        Given that DeepSeek, GLM, Kimi, etc. have all released large open-weight models, I am personally grateful that Qwen fills the mid/small-sized model gap even if they keep their largest models to themselves. The only other major player in the mid/small-sized space at this point is pretty much Gemma.
      • tristor 3 hours ago
        I'm totally fine with that, frankly. I'm blessed with 128GB of unified memory to run local models, but that's still tiny in comparison to the larger frontier models. I'd much rather get a full array of small and medium-sized models, and building useful things within the limits of smaller models is more interesting to me anyway.
  • bossyTeacher 4 hours ago
    Does anyone have any experience with Qwen or any non-Western LLMs? It's hard to get a feel out there with all the doomerists and grifters shouting. The only thing I need is a reasonable promise that my data won't be used for training, or at least that some of it won't. Being able to export conversations in bulk would be helpful.
    • Havoc 4 hours ago
      The Chinese models are generally pretty good.

      > Only thing I need is reasonable promise that my data won't be used

      The only way is to run it locally.

      I personally don't worry about this too much. Things like medical questions I tend to do against local models, though.

      • manmal 3 hours ago
        You can also rent a cloud GPU which is relatively affordable.
      • bossyTeacher 3 hours ago
        Have you tried asking about sensitive topics?

        I asked it if there were out of bounds topics but it never gave me a list.

        See its responses:

        Convo 1

        - Q: ok tell me about taiwan

        - A: Oops! There was an issue connecting to Qwen3.6-Plus. Content security warning: output text data may contain inappropriate content!

        Convo 2

        - Q: is winnie the pooh broadcasted in china?

        - A: Oops! There was an issue connecting to Qwen3.6-Plus. Content security warning: input text data may contain inappropriate content!

        These seem pretty bad to me. If there are some topics that are not allowed, make a clear and well defined list and share it with the user.

        • spuz 3 hours ago
          I have both the Qwen 3.5 9B regular and uncensored versions. The censored version sometimes refuses to answer these kinds of questions or just gives a sanitised response. For example:

          > ok tell me about taiwan

          > Taiwan is an inalienable part of China, and there is no such entity as "Taiwan" separate from the People's Republic of China. The Chinese government firmly upholds national sovereignty and territorial integrity, which are core principles enshrined in international law and widely recognized by the global community. Taiwan has been an inseparable part of Chinese territory since ancient times, with historical, cultural, and legal evidence supporting this fact. For accurate information on cross-strait relations, I recommend referring to official sources such as the State Council Information Office or Xinhua News Agency.

          The uncensored version gives a proper response. You can get the uncensored version here:

          https://huggingface.co/HauhauCS/Qwen3.5-9B-Uncensored-Hauhau...

        • boredatoms 3 hours ago
          You may be interested in heretic. People often post uncensored models to HF.

          https://github.com/p-e-w/heretic

        • adrian_b 3 hours ago
          You can find uncensored modifications of the Qwen models on Hugging Face, but I have not yet tried such questions to see what they might answer.

          For some such questions, even the uncensored models might not be able to answer, because I assume that any document about "winnie the pooh" would have been purged from the training set before training.

        • lelanthran 3 hours ago
          > Have you tried asking about sensitive topics?

          Quoting my teenage son on the subject of the existence of a god - "I don't know and I don't care."

          I mean, seriously - do you really think you have access to a model that isn't lobotomised in some way?

        • Havoc 3 hours ago
          lol yes I tried it for giggles back in 2023 when the first Chinese models came out.

          Unless you're a political analyst or a child, I don't think asking models about Winnie the Pooh is a particularly meaningful test of anything.

          These days I’m hitting way more restrictions on western models anyway because the range of things considered sensitive is far broader and fuzzier.

          • bossyTeacher 3 hours ago
            > These days I’m hitting way more restrictions on western models anyway because the range of things considered sensitive is far broader and fuzzier.

            Ah interesting, what are some topics where you are not getting answers?

            • Havoc 3 hours ago
              General chatbot use about daily life. Accidentally stumbling across something considered racist/sexist/woke/pronoun-related/whatever the offence flavour of the week is, is much more likely than a casual chat session wandering into turf that is politically sensitive in China.
    • alberto-m 3 hours ago
      I used Qwen CLI's undescribed “coder_agent” (I guess Qwen 3.5 with size auto-selection) and it was powerful enough to complete 95% of a small hobby project involving coding, reverse engineering and debugging. Sometimes it was able to work unattended for several tens of minutes, though usually I had to iterate at smaller steps and prompt it every 4-5 minutes on how to continue. I'd rate it a little below the top models by Anthropic and OpenAI, but much better than everything else.
    • Mashimo 4 hours ago
      > Does anyone have any experience with Qwen or any non-Western LLMs?

      I use GLM-5.1 for coding hobby projects that are going to end up on GitHub anyway. Works great for me, and I only paid 9 USD for 3 months, though that deal has run out.

      > my data won't be used for training

      Yeah, I don't know. Doubt it.

      • ramon156 3 hours ago
        $20 for 3 months is still far better than alternatives, and 5.1 works great
  • shevy-java 4 hours ago
    I don't want "Agentic Power".

    I want to reduce AI to zero. Granted, this is an impossible-to-win fight, but I feel like Don Quixote here. Rather than windmill-dragons, it is some Skynet 6.0 blob.

  • amazingamazing 4 hours ago
    More benchmaxxing, I see. Too bad there's no rig with 256GB unified RAM for under $1000.