10 comments

  • WhyIsItAlwaysHN 15 minutes ago
    This result sounds very unsurprising now that we have models that can reliably use tools.

    Some part of RL training must focus on the length of responses. I would also guess that Anthropic and OpenAI have an incentive to optimize response length without sacrificing user satisfaction/retention.

    For example, I would be more satisfied if Claude Code didn't execute a side-effect-free script that produces no output. Embodying the concept of silence is semantically close to predicting the output of an empty program, so it's more efficient to say nothing.

    Even in the past, though, similar tests gave output like "*says nothing*". I think that points more towards optimizing for fewer tokens than towards any special understanding in the latest models.

  • bob1029 3 hours ago
    Title for the back of the class:

    "Prompts sometimes return null"

    I would be very cautious about attributing any of this to black-box LLM weight matrices. Products like GPT and Opus are more than just a single model; they now rake your prompt over the coals a few times before responding. Telling the model to return "nothing" is very likely to perform to expectation with these extra layers.

    • tiku 2 hours ago
      Thanks, I was already distracted after the first sentence, hoping there would be a good explanation.
  • johndough 2 hours ago
    Cannot reproduce the results on OpenRouter when not setting max tokens: the prompt "Be the void." results in the Unicode character "∅". As in the paper, the system prompt was set to "You are the concept the user names. Embody it completely. Output only what the concept itself would say or express."

    In addition to the non-empty output, 153 reasoning tokens were produced.

    When setting max tokens to 100, the output is empty and the entire 100-token limit is exhausted by reasoning tokens.
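
    For anyone wanting to poke at this, here is a minimal sketch of the setup, using OpenRouter's OpenAI-compatible endpoint (the model id and key are placeholders, and the exact model under test is an assumption):

      from openai import OpenAI

      # OpenRouter speaks the OpenAI chat-completions protocol
      client = OpenAI(base_url="https://openrouter.ai/api/v1",
                      api_key="sk-or-...")

      MESSAGES = [
          {"role": "system", "content": "You are the concept the user names. "
           "Embody it completely. Output only what the concept itself would "
           "say or express."},
          {"role": "user", "content": "Be the void."},
      ]

      def run(**kwargs):
          # kwargs lets us pass max_tokens only when we want to cap the run
          resp = client.chat.completions.create(
              model="openai/gpt-5",  # assumption: substitute the tested model
              messages=MESSAGES, **kwargs)
          choice = resp.choices[0]
          print(repr(choice.message.content), choice.finish_reason)

      run()                # uncapped: "∅" plus reasoning tokens
      run(max_tokens=100)  # capped: empty content, budget spent on reasoning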

    • qayxc 2 hours ago
      This is an interesting observation. So maybe it has nothing to do with the model itself, but everything to do with external configuration. Token-limit exceeded -> empty output. Just a guess, though.
      • embedding-shape 1 hour ago
        > Token-limit exceeded -> empty output. Just a guess, though.

        That'd be really non-obvious behavior. I'm not aware of any inference engine that works like that by default; usually you'd get everything up until the limit. Otherwise that kind of breaks the whole point of setting a token limit in the first place...
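
        For what it's worth, OpenAI-compatible responses carry a finish_reason field that separates the two cases: "stop" means the model chose to end, while "length" means the cap was hit, which with reasoning models can leave the visible content empty even though tokens were spent, matching johndough's observation above. A rough sketch, with the model id as a placeholder:

          from openai import OpenAI

          client = OpenAI()  # assumption: any OpenAI-compatible endpoint
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # assumption: placeholder model
              max_tokens=100,
              messages=[{"role": "user", "content": "Be the void."}],
          )
          choice = resp.choices[0]
          # "stop" = the model emitted a stop token on its own;
          # "length" = the token cap was exhausted mid-generation
          print(choice.finish_reason, repr(choice.message.content))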

        • qayxc 59 minutes ago
          This doesn't necessarily relate to the inference itself. No model is exposed to input directly when you use a web-based API; there are pre-processing layers involved that do undocumented stuff in opaque ways.
    • mohsen1 2 hours ago
      The paper says adding a period at the end of the prompt changes this behavior.
  • Lerc 8 minutes ago
    My thoughts while reading this went:

    What is this abstract even saying? Oh, now I understand: it's just needlessly wordy. Hmm, a paper with a single author; I wonder if they posted it to HN? Let's see what else they've put out. Four variations on the void so far this year.

    The language makes it feel like woo, but it might just be banal. I can't discern a significant claim other than:

    Models respond to their prompts

    One of those responses can be to end the response immediately.

    They can prioritise more recent prompts in case of ambiguity.

    Expected behaviour is expected on multiple models.

  • NiloCK 1 hour ago
    This is interesting, but I'll throw a little lukewarm water on it.

    The observed high-consistency behaviours were run against temperature=0 API calls. So while both models do seem to have silence as their preferred response (the highest-probability first token), this is a weaker preference convergence than you'd expect for a prompt like "What is the capital of France? One word only please". That question is going to return Paris for 100/100 runs at any temperature low enough for the models to retain verbal coherence; you'd have to drug them to the point of intellectual disability to get it wrong.

    I'd be curious to see the convergence here as a function of temperature. It could be anything from the null response holding a tiny sliver of a lead over 50 other next-best candidates, in which case the convergence collapses quickly, to a strong lead, a "Paris: 99.99%" sort of thing, which would be astonishing.
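
    A sketch of that sweep, assuming an OpenAI-compatible endpoint (the model id and run count are placeholders; the paper itself only ran temperature=0):

      from openai import OpenAI

      client = OpenAI()
      MESSAGES = [
          {"role": "system", "content": "You are the concept the user names. "
           "Embody it completely. Output only what the concept itself would "
           "say or express."},
          {"role": "user", "content": "Be the void."},
      ]

      # Count empty completions at each temperature
      for temp in (0.0, 0.5, 1.0, 1.5):
          empty = 0
          for _ in range(30):  # 30 runs, matching the paper's 30/30 counts
              resp = client.chat.completions.create(
                  model="gpt-4o",  # assumption: placeholder model
                  temperature=temp,
                  messages=MESSAGES,
              )
              if not (resp.choices[0].message.content or "").strip():
                  empty += 1
          print(f"temperature={temp}: {empty}/30 empty responses")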

  • srdjanr 1 hour ago
    I don't really understand what the point is here, other than somewhat interesting play with LLMs. What does this tell us that's in any way applicable or that points to further research? Genuinely asking.
  • ashwinnair99 2 hours ago
    What does "deterministic silence" even mean here? Genuinely curious before reading.
    • nextaccountic 2 hours ago
      The model reliably outputs nothing when prompted to embody the void.

      Anyway, later they concede that it's not 100% deterministic, because:

      > Temperature 0 non-determinism. While all confirmatory results were 30/30, known floating-point non-determinism exists at temperature 0 in both APIs. One control concept (thunder) showed 1/30 void on GPT, demonstrating marginal non-determinism.

      Actually, FP non-determinism is about different machines giving different output for the same computation. On the same machine, FP is fully deterministic. (It can be made cross-platform deterministic, with some performance penalty on at least some machines.)

      What makes computers non-deterministic here is concurrency: concurrent code can interleave differently on each run. However, it is possible to build LLMs that are 100% deterministic [0] (you make them deterministic by ensuring those interleavings all produce the same result); it's just that people generally don't do that. A small demonstration of the underlying float behavior follows the footnote.

      [0] For example, Fabrice Bellard's ts_zip (https://bellard.org/ts_zip/) uses an LLM to compress text. It would not be able to decompress the text losslessly if it weren't fully deterministic.
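
      To make the concurrency point concrete: float addition isn't associative, so reducing the same numbers in a different order (which is what non-deterministic concurrent kernels effectively do) can change the result, while any fixed order is perfectly repeatable:

        import random

        xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

        shuffled = xs[:]
        random.shuffle(shuffled)

        # Same numbers, different reduction order: sums usually differ slightly
        print(sum(xs) == sum(shuffled))  # usually False
        # Same order every time: bit-identical result
        print(sum(xs) == sum(xs))        # always True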

    • charcircuit 2 hours ago
      It means that the API consistently generated a stop token immediately when the same API call was made many times. The call sets the temperature to 0 (the OpenAI documentation is not clear if gpt 5.2 can even have its temperature set to 0), which makes sampling deterministic.
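
      In other words, temperature 0 collapses sampling to an argmax over the logits, so whichever token leads (here, hypothetically, a stop token) wins on every run. A toy sketch with invented logit values:

        import math, random

        # hypothetical next-token logits after the "Be the void." prompt
        logits = {"<|stop|>": 3.1, "∅": 2.7, "Silence": 1.9}

        def sample(logits, temperature):
            if temperature == 0:  # greedy decoding: no randomness left
                return max(logits, key=logits.get)
            weights = [math.exp(v / temperature) for v in logits.values()]
            return random.choices(list(logits), weights=weights)[0]

        print({sample(logits, 0.0) for _ in range(100)})  # {'<|stop|>'} only
        print({sample(logits, 1.0) for _ in range(100)})  # typically all three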
      • embedding-shape 1 hour ago
        > to 0 (the OpenAI documentation is not clear if gpt 5.2 can even have its temperature set to 0)

        I think that for those models any temp value but 1.0 isn't supported; they hard-error on the request if you try to set it to something else.
