9 comments

  • vocx2tx 2 hours ago
    But still a kludge. Better: use something equivalent to Go's testing/synctest[0] package, which lets you write tests that run in a bubble where time is fake and deterministic.

    [0] https://pkg.go.dev/testing/synctest
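
    For the Rust code in question, tokio's paused test clock is a rough equivalent. A minimal sketch, assuming the test-util feature is enabled (hypothetical test, not Servo's actual code):

      #[tokio::test(start_paused = true)]
      async fn expiry_check_is_deterministic() {
          use std::time::Duration;
          let ten_years = Duration::from_secs(10 * 365 * 24 * 60 * 60);
          let start = tokio::time::Instant::now();
          // With the clock paused, this resolves instantly: virtual time
          // jumps to the deadline instead of waiting ten years.
          tokio::time::sleep(ten_years).await;
          assert_eq!(start.elapsed(), ten_years);
      }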

    • dathinab 2 hours ago
      in general

      - generating test data in a realistic way is often better than hard-coding it (it also makes it easier to add prop testing or similar)

      - make the current time an input to your functions (i.e. the old "prefer pure functions" discussion). This doesn't just make things more testable, it can also matter to make sure: 1. one unit of logic sees the same time, 2. you avoid unneeded calls to `now()` (only rarely matters, but it can)
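
      a minimal sketch of the second point in Rust (hypothetical names):

        use std::time::{Duration, SystemTime};

        // "now" is an argument: one unit of logic sees a single instant,
        // and tests can pin it to a constant.
        fn is_expired(now: SystemTime, expiry: SystemTime) -> bool {
            now >= expiry
        }

        #[test]
        fn fixed_now_makes_the_test_deterministic() {
            let now = SystemTime::UNIX_EPOCH + Duration::from_secs(1_700_000_000);
            assert!(!is_expired(now, now + Duration::from_secs(365 * 24 * 60 * 60)));
            assert!(is_expired(now, now - Duration::from_secs(1)));
        }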

      • WorldMaker 1 hour ago
        Similarly, I like .NET's TimeProvider abstraction [1]. You pass a TimeProvider to your functions. At runtime you can provide the default TimeProvider.System; when testing, FakeTimeProvider has a lot of handy tools for deterministic testing.

        One of the further benefits of .NET's TimeProvider is that it can also be provided to low-level async methods like `await Task.Delay(time, timeProvider, cancellationToken)`, which also increases the testability of general asynchronous code in a deterministic sandbox once you learn to pass TimeProvider to even the low-level calls that take an optional one.

        [1] https://learn.microsoft.com/en-us/dotnet/standard/datetime/t...
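
        The same shape can be sketched in Rust (hypothetical trait and names, not the actual TimeProvider API):

          use std::time::{Duration, SystemTime};

          trait Clock {
              fn now(&self) -> SystemTime;
          }

          // Production: the real system clock.
          struct SystemClock;
          impl Clock for SystemClock {
              fn now(&self) -> SystemTime { SystemTime::now() }
          }

          // Tests: a fixed instant chosen per test.
          struct FakeClock(SystemTime);
          impl Clock for FakeClock {
              fn now(&self) -> SystemTime { self.0 }
          }

          fn days_until(clock: &dyn Clock, expiry: SystemTime) -> u64 {
              expiry.duration_since(clock.now())
                  .map(|d| d.as_secs() / 86_400)
                  .unwrap_or(0) // already expired
          }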

      • 0x457 1 hour ago
        Also, if you do use `now()` in this case you can always do `now() + SomeDistantDuration`
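
        A sketch of that shape in Rust (hypothetical duration, just to show the idea):

          use std::time::{Duration, SystemTime};

          // Relative to run time, not a hardcoded calendar date, so it
          // stays ~100 years out no matter when the test runs.
          fn distant_expiry() -> SystemTime {
              SystemTime::now() + Duration::from_secs(100 * 365 * 24 * 60 * 60)
          }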
    • bombcar 37 minutes ago
      This can cause other types of bugs to go unnoticed, such as leap year fun (if you handle 100 years, did you handle the 400th year?).
  • andai 3 hours ago
    Interesting, from the title I thought it was intentional, as a "forced code review." Apparently not, but now I really like that idea!
    • adrianpike 3 hours ago
      We've done that at a few places I've been at - it's tricky, because if the failure window is too short it's just annoying toil, but if it's too long there's a risk of losing context and having to remember what the heck we were thinking.

      Overall it's still net positive for me in certain cases of enforcing things to be temporary, or at least revisited.

      • bombcar 39 minutes ago
        Which is why SSL certs are now 47 days long or whatever it is.
  • Alupis 3 hours ago
    Just skimmed the PR, I'm sure the author knows more than I - but why hard code a date at all? Why not do something like `today + 1 year`?
    • johanvts 3 hours ago
      That introduces a dependency on a clock, which might be undesirable; I just had a similar problem where I also went for hardcoding for that reason.
      • cogman10 2 hours ago
        There's already a clock dependency. The test fails because of that.
      • rcxdude 3 hours ago
        Arguably you should have a fixed start date for any given test, but time is quite hard to abstract out like that (there are enough time APIs that you'd want OS support, but Linux for example doesn't support clock namespaces for the realtime clock, only a few monotonic clocks)
    • whynotmaybe 3 hours ago
      Because it should be `today + 1 year + randomInt(1,42) days`.

      Always include some randomness in test values.

      • zelos 1 hour ago
        Generate fuzz tests using random values with a fixed seed, sure, but using random values in tests that run on CI seems like a recipe for hard-to-reproduce flaky builds unless you have really good logging.
      • rcxdude 3 hours ago
        Not a good idea for CI tests. It will just make things flaky and gum up your PR/release process. Randomness or any form of nondeterminism should be in a different set of fuzzing tests (if you must use an RNG, a deterministic one is fine for CI).
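
        For example, with the rand crate (a sketch; seed and range are arbitrary):

          use rand::rngs::StdRng;
          use rand::{Rng, SeedableRng};

          #[test]
          fn expiry_offset_reproducible_on_ci() {
              // Fixed seed: values look arbitrary but are identical on
              // every run, so any failure is reproducible.
              let mut rng = StdRng::seed_from_u64(0xC1);
              let extra_days = rng.gen_range(1..=42u64);
              assert!((1..=42).contains(&extra_days));
          }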
        • whynotmaybe 2 hours ago
          That's why it's "randomInt(1,42)", not "randomLong()".
        • dathinab 2 hours ago
          if it makes things flaky

          then it actually is a huge success

          because it found a bug you overlooked in both impl. and tests

          at least iff we speak about unit tests

          • jstanley 3 minutes ago
            Only if it becomes obvious why it is flaky. If it's just sometimes broken but really hard to reproduce then it just gets piled on to the background level of flakiness and never gets fixed.
      • CoastalCoder 2 hours ago
        > Always include some randomness in test values.

        If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.

        • dathinab 2 hours ago
          humans are very good at overlooking edge cases, off-by-one errors, etc.

          so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases

          you could say there is an "adding more randomness -> cost" ladder, like

          - no randomness, no cost, nothing gained

          - a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)

          - (limited) prop testing, high cost (the test runs multiple times with many random values), decent chance to find incorrect edge cases (<- can be barely doable in unit tests, if limited enough; often feature-gated as too expensive)

          - (full) prop testing/fuzzing, very very high cost, very high chance that incorrect edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)

          • ssdspoimdsjvv 2 hours ago
            I've learnt that if a test only fails sometimes, it can take a long time for somebody to actually investigate the cause; in the meantime it's written off as just another flaky test. If there really is a bug, it will probably surface in production sooner than it gets fixed.
            • dathinab 1 hour ago
              sadly yes

              people often take flaky tests way less seriously than they should

              I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically this was also not related to any random test data, but to load/race-condition-related things which failed when too many tests that created fully separate tenants for isolation happened to run at the same time).

              And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is, if you can, because too many libs/tools/etc. don't allow that). At least for "merge approval" runs. That many CI systems suck badly the moment your project and team size aren't around the size of a toy project doesn't help either.

          • SkyBelow 18 minutes ago
            Can't one get randomness and determinism at the same time? Randomly generate the data, but do so when building the test, not when running the test. This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook. Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.
        • whynotmaybe 2 hours ago
          Must be some Mandela effect about some TDD documentation I read a long time ago.

          If you test math_add(1,2) and it returns 3, you don't know if the code does `return 3` or `return x+y`.

          It seems I might need to revise my view.

          • Izkata 2 hours ago
            I vaguely remember the same advice, it's pretty old. How you use the randomness is test-specific; for example, in math_add() it'd be something like:

              jitter = random(5)
              assertEqual(3 + jitter, math_add(1, 2 + jitter))
            
            If it was math_multiply(), then adding the jitter would fail - that would have to be multiplied in.

            Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
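
            For instance, with the proptest crate it might look like this (hypothetical math_add, same relation as the jitter example above):

              use proptest::prelude::*;

              fn math_add(x: i64, y: i64) -> i64 { x + y }

              proptest! {
                  // The framework picks the jitter and shrinks failures
                  // to a minimal case with a clear message.
                  #[test]
                  fn add_shifts_with_jitter(jitter in 0i64..5) {
                      prop_assert_eq!(math_add(1, 2 + jitter), 3 + jitter);
                  }
              }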

          • ajs1998 2 hours ago
            Randomness is useful if you expect your code to do the correct thing with some probability. You test lots of different samples and if they fail more than you expect then you should review the code. You wouldn't test dynamic random samples of add(x, y) because you wouldn't expect it to always return 3, but in this case it wouldn't hurt.
      • devin 3 hours ago
        Are you joking? This is the kind of thing that leads to flaky tests. I was always counseled against the use of randomness in my tests, unless we're talking generative testing like quickcheck.
        • dathinab 1 hour ago
          or, maybe, there is something hugely wrong with your code, review pipeline or tests if adding randomness to unit test values makes your tests flaky and this is a good way to find it
          • devin 10 minutes ago
            or, maybe, it signals insufficient thought about the boundary conditions that should or shouldn't trigger test failures.

            doing random things to hopefully get a failure is fine if there's an actual purpose to it, but putting random values all over the place in the hopes it reveals a problem in your CI pipeline or something seems like a real weak reason to do it.

        • whynotmaybe 3 hours ago
          `today` is random.
          • InsideOutSanta 50 minutes ago
            If "today" were random, our universe would be pretty fricken weird.
          • Eldt 2 hours ago
            It's dynamic, but it certainly isn't random, considering it follows a consistent sequence
      • andai 3 hours ago
        Interesting, haven't heard this before (I don't know much about testing). Is this kind of like fuzzing?
        • whynotmaybe 3 hours ago
          I recently had a race condition that made tests randomly fail because one test created "data_1" and another test also created "data_1".

          - Test 1 -> set data_1 with value 1

          - Test 1 -> `do some magic`

          - Test 1 -> assert value 1 + magic = expected value

          - Test 2 -> set data_1 with value 2

          But this can fail if `do some magic` is slow and Test 2 starts before Test 1 asserts.

          So I can either stop parallelism - but in real life parallelism exists - or ensure that each test has a random id, just like it would happen in real life.
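
          Something like this, say (a sketch, assuming the rand crate):

            use rand::Rng;

            // Each test gets its own id, so parallel tests can't both
            // create "data_1" and race on it.
            fn unique_id(prefix: &str) -> String {
                let suffix: u32 = rand::thread_rng().gen();
                format!("{prefix}_{suffix:08x}")
            }

            #[test]
            fn ids_do_not_collide() {
                // A collision is a ~1-in-4-billion event.
                assert_ne!(unique_id("data"), unique_id("data"));
            }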

  • bombcar 4 hours ago
    Any time constant will be exceeded someday.

    An impossibly short period of time after the heat death of the universe on a system that shouldn’t even exist: ERROR TIME_TEST FAILURE

    • unkl_ 4 hours ago
      Posted on HN in 2126: 100 years ago, someone wrote a test for servo that included an expiry in 2126
      • jerf 4 hours ago
        I've got some tests in active code bases that are using the end of 32-bit Unix time as "we'll never get there". That's not because the devs were lazy; these tests date from when that was the best they could possibly do. They're on track to be cycled out well before then (hopefully this year), so, hopefully, they'll be right that their code "won't get there"... but then there's the testing and code that assumes this that I don't know about, which may still be a problem.

        "End of Unix time" is under 12 years now, so, a bit longer than the time frame of this test, but we're coming up on it.

        • bombcar 2 hours ago
          I seem to recall much smugness on Slashdot around the "idiot winblows users limited by DOS y2k" and how time_t was "so much better". Even then a few were prophesying that it would come to bite us eventually ...
      • yetihehe 4 hours ago
        Now I feel bad for using (system foundation timestamp)+100 years as the end of "forever" ownership relations in one of my systems. Looking now, there are only 89 years left. I think I should use nulls instead.
        • prerok 1 hour ago
          Well, it won't be your problem /j
    • tacostakohashi 1 hour ago
      Yep - that's why I always choose my time constants to be during years when I will be retired, or possibly dead.

      If you're going to kick the can down the road, why not kick it pretty far?

    • fny 4 hours ago
      Who here remembers the FUD of Y2K?
      • acuozzo 4 hours ago
        Don't mistake a defused bomb for a dud.

        https://en.wikipedia.org/wiki/Preparedness_paradox

        • arduanika 2 hours ago
          Thanks! I think about this concept a lot, and now I know there's a name for it. "Preparedness paradox". I'll have to remember that.

          And to your point, Y2K is right there on the wiki page for it.

      • philipallstar 4 hours ago
        I remember the reality of all the work needed to avoid issues.
      • jghn 2 hours ago
        As others have stated, the lack of visible effect is not the same thing as there never having been a land mine in the first place.

        I can tell you anecdotally that on 12/31/1999 I was hanging with some friends. At midnight UTC we turned on the footage from London. At first it appeared to be a fiery hellscape armageddon; while it turned out to just be fireworks with a weird camera angle, there was a moment where we were concerned something was actually happening. Most of us in the room were technologists, and while we figured it'd all be no big deal, we weren't *sure*, and it very much alarmed us to see it on the screen.

      • gom_jabbar 3 hours ago
        Made me think of Mark Fisher's Y2K Positive text:

        > At the Great Midnight at the century's end, signifying culture will flip over into a number-based counterculture, retroprocessing the last 100 years. Whether global disaster ensues or not, Y2K is a singularity for cybernetic culture. It's time to get Y2K positive.

        Mark Fisher (2004), "Y2K Positive", Mute.

      • LocalPCGuy 4 hours ago
        While there was a lot of FUD in the media, there were also a lot of scenarios that were actually possible but were averted due to a LOT of work and attention ahead of time. It should be looked at, IMO, as a success of communication, warnings, and a lot of effort that nothing of major significance happened.
        • tejohnso 4 hours ago
          Yes, Y2K is a success story, similar to the alert and response related to ozone layer and CFCs.

          Dissimilar to the global climate catastrophe, unfortunately.

          ---

          The 2024 state of the climate report: Perilous times on planet Earth

          https://academic.oup.com/bioscience/article/74/12/812/780859...

          "Tragically, we are failing to avoid serious impacts"

          "We have now brought the planet into climatic conditions never witnessed by us or our prehistoric relatives within our genus, Homo"

          "Despite six IPCC reports, 28 COP meetings, hundreds of other reports, and tens of thousands of scientific papers, the world has made only very minor headway on climate change"

          "projections paint a bleak picture of the future, with many scientists envisioning widespread famines, conflicts, mass migration, and increasing extreme weather that will surpass anything witnessed thus far, posing catastrophic consequences for both humanity and the biosphere"

          • timschmidt 3 hours ago
            I don't mean to lessen the impact of that statement; I think climate change is a serious problem. But for most of the geologic time that the genus Homo has existed, Earth has been in an ice age, much of which we'd consider a "snowball Earth". The last warm interglacial period, the Eemian, was 120,000 years ago.
            • nkrisc 2 hours ago
              The genus Homo dates back nearly 2 million years.
            • john_strinlai 3 hours ago
              this is the same style of comment as "no offense, but <offensive thing>"

              if you didn't intend to lessen the impact of that statement, why say something that is specifically meant to lessen the impact of the statement? just say what you want to say without the hedging.

            • philipwhiuk 3 hours ago
              What you just wrote is the same as: 'the entire lifecycle of humanity has no precursor to the conditions we are about to face'.

              We aren't facing the ice age that has covered the last 120,000 years.

              I'm sure the rocky planet will survive just fine, maybe even some extremophiles, even if we completely screw up the atmosphere. Not 6 billion humans though.

            • yfontana 3 hours ago
              [dead]
      • kjs3 2 hours ago
        Tell us you weren't involved in Y2K without telling us you weren't involved in Y2K.
      • NetOpWibby 4 hours ago
        Exciting times with an anticlimactic end; I was in middle school, relishing the chaos of the adult world.
      • myself248 4 hours ago
        Another victim of the preparedness paradox.
  • harikb 57 minutes ago
    A comment from the PR

    > Not a serious problem, but the weekdays are wrong. For example, 18-Apr-2127 is a Friday, not Sunday.

    There are now many magical dates to remember: 2126 (I think the PR was updated after that comment) and 2177. There is also a 2028 somewhere.
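
    The weekday claim checks out; for example, with the chrono crate:

      use chrono::{Datelike, NaiveDate, Weekday};

      fn main() {
          let d = NaiveDate::from_ymd_opt(2127, 4, 18).unwrap();
          assert_eq!(d.weekday(), Weekday::Fri); // a Friday, as the PR comment says
      }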

  • samlinnfer 2 hours ago
    I had to plant a 10-year time bomb in our SAML SP certificate because AFAIK there is no other way to do it. It’s been 7 years since then. Dreading contacting all the IdPs and getting them to update the SAML config.
  • db48x 21 hours ago
    Classic!

    But before you judge the fix too harshly, I bet it’s just a quick and easy fix that will suffice while a proper fix (to avoid depending on external state) is written.

    • pavel_lishin 3 hours ago
      I'll bet you one US Dollar that this is a scenario where the temporary fix becomes the permanent one. (Well, at least, permanent for a hundred years.)

      Some day, Pham Nuwen is going to be bitching about this test suite between a pair of star systems.

      • db48x 2 hours ago
        That’s one of my favorite books :)

        I agree that it’s plausible!

    • em-bee 3 hours ago
      of course it is just an easy fix. it's the kind of solution that even someone like me could write who has no understanding of the code at all. (i am not trying to imply that the submitter of the PR doesn't understand the code, just that understanding it is unlikely to be necessary, and thus the change bears no risk.)

      but, the solution now hides the problem. if i wanted to get someone to solve the problem i'd set the new date in the near future until someone gets annoyed enough to fix it for real.

      and i have to ask, why is this a hardcoded date at all? why not "now plus one week"?

      • db48x 39 minutes ago
        There’s a lot to be said for simplicity. The more logic you put into handling the dates correctly in the tests, the more likely you are to mess up the tests themselves. These tests were easy to write, easy to review, easy to verify, and served perfectly well for 10 years.

        But doing it right shouldn’t be all that hard.

  • kristofferR 3 hours ago
    [flagged]
    • tomhow 2 hours ago
      Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

      https://news.ycombinator.com/newsguidelines.html

    • andai 3 hours ago
      It was started by people who thought Twitter didn't have enough censorship (back when it had a lot more).

      I guess that's a matter of personal sensibilities, but it's pretty funny to me.

      (Note: this is the only fact I know about it, happy to learn more.)

    • rirze 3 hours ago
      Any social space will break down upon reaching a critical point in representation of the general populace.

      I have no idea about the development however.

    • MBCook 2 hours ago
      Worked for me.
  • dhosek 42 minutes ago
    One of the comments:

    > Us, ten years after generating the certificate: "Who could have possibly foreseen that a computer science department would still be here ten years later."

    This was why there was a Y2K bug. Most of that code was written in the 80s, during the Reagan era. Nobody expected civilization to make it to the year 2000.

    • bombcar 40 minutes ago
      No, people thought that storing a year as two digits was fine because computers were advancing so fast that it was unlikely they'd still be used in the year 2000 - or if they were it was someone else's problem.

      And they were mostly right! Not many 80s machines were still being used in 1999, but lots of software with roots going back to then was still in use. Data formats and such have a tendency to stick around.

      • naikrovek 29 minutes ago
        Software has incredible inertia compared to hardware.

        It is effectively trivial to buy millions of dollars of hardware to upgrade your stuff when compared with paying for existing software to be rewritten for a new platform.

        • oasisaimlessly 13 minutes ago
          This is a very SWE-centric perspective. The very names of software/hardware would imply the exact opposite.