- Sometimes your test code expects time to be moving forward
- sometimes your code might store classes into a hashmap for caching, and the cache might be built before the freeze time class override kicks in
- sometimes it happens after you have patched the classes and now your cache is weirdly poisoned
- sometimes some serialization code really cares about the exact class used
- sometimes test code acts really weird if time stops moving forward (when people use freezetime frozen=true). Selenium timeouts never clearing was funny
- sometimes your code gets a hold of the unpatched date class through silliness, but only in one spot
Fun times.
The nicest thing is being able to just pass in a “now” parameter in things that care about time.
- generating test data in a realistic way is often better than hard coding it (it also makes it easier to add prop testing or similar)
- make the current time an input to your functions (i.e. the whole old prefer-pure-functions discussion). This isn't just about making things more testable; it also can matter to make sure: 1. one unit of logic sees the same time, 2. you avoid unneeded calls to `now()` (only rarely matters, but can matter)
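A minimal Python sketch of the "now as an input" idea (the function and names here are illustrative, not from any particular codebase):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical example: the check takes `now` as a parameter instead of
# calling datetime.now() internally, so one unit of logic sees a single
# consistent time and tests can pass in any instant they like.
def is_overdue(due: datetime, now: datetime) -> bool:
    return now > due

due = datetime(2030, 1, 1, tzinfo=timezone.utc)
print(is_overdue(due, now=due - timedelta(seconds=1)))  # False
print(is_overdue(due, now=due + timedelta(seconds=1)))  # True
```

No clock patching, no cache poisoning, and the production caller just passes `datetime.now(timezone.utc)` at its boundary.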
One of the further benefits of .NET's TimeProvider is that it can also be passed to low-level async methods like `await Task.Delay(time, timeProvider, cancellationToken)`, which makes general asynchronous code testable in a deterministic sandbox once you learn to pass the TimeProvider even to low-level calls that accept an optional one.
[1] https://learn.microsoft.com/en-us/dotnet/standard/datetime/t...
That seems like a downgrade to me!
Not as convenient for unit tests because you have to run the test with LD_PRELOAD.
It's just too easy to keep adding new feature flags and never removing them. Until one day the FF backend goes down and you have 300 FFs all evaluate to false.
I think it worked out really well even though it increased the administrative overhead. We were always able to quickly revert behavior without needing to push code and it let us gradually shrink a lot of the legacy features we had on the project.
Overall it's still net positive for me in certain cases of enforcing things to be temporary, or at least revisited.
https://www.digicert.com/blog/tls-certificate-lifetimes-will...
We experienced several of those over the years, and generally it was the test that was wrong, not the code it was testing.
For example, this simplified test hits several of those pitfalls:
var expected = start.AddMonths(1);
var actual = start.ToLocal().AddMonths(1).ToUtc();
Assert(expected == actual);

Always include some randomness in test values.
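The same trap is easy to reproduce in any language; here is a hedged Python sketch of it, using a DST transition (Europe/Berlin switches on 2025-03-30) to make the two "one month later" computations diverge:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

start = datetime(2025, 3, 15, 12, 0, tzinfo=timezone.utc)

# "a month later" computed directly in UTC
expected = start + timedelta(days=30)

# the same arithmetic done in local wall-clock time crosses the
# Europe/Berlin DST switch (2025-03-30), so the UTC offset changes
local = start.astimezone(ZoneInfo("Europe/Berlin"))
actual = (local + timedelta(days=30)).astimezone(timezone.utc)

print(expected - actual)  # off by one hour
```

With a fixed `start` that never crosses a transition, the test passes forever and hides the bug; randomized start times eventually land on a boundary.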
then it actually is a huge success
because it found a bug you overlooked in both impl. and tests
at least iff we speak about unit tests
The whole concept of allowing a flaky unit test to exist is wild and dangerous to me. It makes a culture of ignoring real failures in what, should be, deterministic code.
So, yes, logging the inputs is extremely important. So is minimizing any IO dependency in your tests.
But then that runs against another important rule, that integration tests should test the entire system, IO included. So, your error handling must always log very clearly the cause of any IO error it finds.
If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.
so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases
you could say there is an "adding more randomness -> cost" ladder, like:
- no randomness, no cost, nothing gained
- a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)
- (limited) prop testing, high cost (test runs multiple times with many random values), decent chance to find incorrect edge cases (<- can be barely doable in unit tests, if limited enough; often gated off as too expensive)
- (full) prop testing/fuzzing, very very high cost, very high chance incorrect edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)
people often take flaky tests way less seriously than they should
I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically this was also not related to any random test data but more to load/race-condition things which failed when too many tests that created fully separate tenants for isolation happened to run at the same time).
And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is if you can, because too many libs/tools/etc. do not allow that). At least for "merge approval" runs. That many CI systems suck badly the moment your project and team size aren't around the size of a toy project doesn't help either.
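One common compromise is a sketch like the following (the `TEST_SEED` variable name is my own, not from any particular framework): draw a fresh seed per run but always log it, so a "flaky" failure can be replayed deterministically, and let CI pin the seed via the environment:

```python
import os
import random

# CI can export TEST_SEED for reproducible "merge approval" runs;
# local runs get a fresh seed, printed so any failure can be replayed.
seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
rng = random.Random(seed)
print(f"TEST_SEED={seed}")

# all test data is drawn from rng, never from the global random module
value = rng.randint(0, 10**6)
```

This keeps the edge-case-finding benefit of randomness while making every failure reproducible from the logs.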
So don't do that. That's bad practice. The test has failed for a reason and that needs to be handled.
Some tests can be at the mercy of details that are hard to control, e.g. thread scheduling, thermal-based CPU throttling, or memory pressure from other activity on the system
A test should communicate its reason for testing the subject, and when an input is generated or random, it clearly communicates that this test doesn't care about the specific _value_ of that input, it's focussed on something else.
This has other beneficial effects on test suites, especially as they change over the lifetime of their subjects:
* keeping test data isolated, avoiding coupling across tests
* avoiding magic strings
* as mentioned in this thread, any "flakiness" is probably a signal of an edge-case that should be handled deterministically
* it's more fun [1]
If you test math_add(1,2) and it returns 3, you don't know if the code does `return 3` or `return x+y`.
It seems I might need to revise my view.
jitter = random(5)
assertEqual(3 + jitter, math_add(1, 2 + jitter))
If it was math_multiply(), then adding the jitter would fail - that would have to be multiplied in.

Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
Damn, must be why only white hair is growing on my head now.
>Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
So the concept of random is still there but expressed differently ? (= Am I partially right ?)
Here's an example with a python library: https://hypothesis.readthedocs.io/en/latest/tutorial/introdu...
The strategy "st.lists(st.integers())" generates a random list of integers that get passed into the test function.
And also this page says by default tests would be run (up to) 100 times: https://hypothesis.readthedocs.io/en/latest/tutorial/setting...
So I'm thinking... (not tested)
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_math_add(a, b):
assert a + b == math_add(a, b)
...which is of course a little silly, but math_add() is a bit of a silly function anyway.

- Test 1 -> set data_1 with value 1
- Test 1 -> `do some magic`
- Test 1 -> assert value 1 + magic = expected value
- Test 2 -> set data_1 with value 2
But this can fail if `do some magic` is slow and Test 2 starts before Test 1 asserts.
So I can either stop parallelism, but in real life parallelism exists, or ensure that each test has a random id, just like it would happen in real life.
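A sketch of the random-id-per-test approach (uuid4 is just one convenient way to get collision-free ids; the record shape is made up for illustration):

```python
import uuid

# Each test creates its own record under a unique id, so parallel tests
# can never read or overwrite each other's data.
def make_record(value):
    return {"id": f"test-{uuid.uuid4()}", "value": value}

a = make_record(1)
b = make_record(2)
assert a["id"] != b["id"]  # concurrently running tests never collide
```

This keeps parallelism enabled and doubles as a small dose of the realistic randomness discussed upthread.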
doing random things to hopefully get a failure is fine if there's an actual purpose to it, but putting random values all over the place in the hopes it reveals a problem in your CI pipeline or something seems like a real weak reason to do it.
An impossibly short period of time after the heat death of the universe on a system that shouldn’t even exist: ERROR TIME_TEST FAILURE
"End of Unix time" is under 12 years now, so, a bit longer than the time frame of this test, but we're coming up on it.
"More than 2 million years seems to be enough for us to not be around any more when the bug reports start appearing."
Sadly, I don't recall which game it was. Maybe SpaceChem?
If you're going to kick the can down the road, why not kick it pretty far?
And to your point, Y2K is right there on the wiki page for it.
I can tell you anecdotally that on 12/31/1999 I was hanging with some friends. At midnight UTC we turned on the footage from London. At first it appeared to be a fiery hellscape armageddon. While it turned out to just be fireworks with a weird camera angle, there was a moment where we were concerned something was actually happening. Most of us in the room were technologists, and while we figured it'd all be no big deal, we weren't *sure* and it very much alarmed us to see it on the screen.
Dissimilar to the global climate catastrophe, unfortunately.
---
The 2024 state of the climate report: Perilous times on planet Earth
https://academic.oup.com/bioscience/article/74/12/812/780859...
"Tragically, we are failing to avoid serious impacts"
"We have now brought the planet into climatic conditions never witnessed by us or our prehistoric relatives within our genus, Homo"
"Despite six IPCC reports, 28 COP meetings, hundreds of other reports, and tens of thousands of scientific papers, the world has made only very minor headway on climate change"
"projections paint a bleak picture of the future, with many scientists envisioning widespread famines, conflicts, mass migration, and increasing extreme weather that will surpass anything witnessed thus far, posing catastrophic consequences for both humanity and the biosphere"
To me, it seems to make it even more significant. Because as you point out, Homo evolved under ice age conditions over millions of years. Well, here we are about to be thrust into uncharted territory, in an extremely short period of time. With very fragile global interdependencies, an overpopulated planet, and billions of people exposed to the consequences.
Earth has certainly thrived with a warmer climate. No reason we can't too. The problems - for us and other life - stem from the rate of change. Which is easy to see is very very rapid compared to the historical cycles, but still a slow motion trainwreck compared to an asteroid strike, supervolcano, or gamma ray pulse, all of which it seems Earth has experienced. Life and human society will adapt if it has enough time. The quicker the catastrophe the more challenging that is.
I guess what I'm saying is that we're not doing ourselves any favors, but we also shouldn't underestimate mother nature's ability to throw us a curve ball in the 9th inning that makes everything worse. Life has endured an awful lot on this little rock.
We aren't facing the ice age that has been the last 120,000 years.
I'm sure the rocky planet will survive just fine, maybe even some extreemophiles, even if we completely screw up the atmosphere. Not 6 billion humans though.
Sometimes a great deal so. Sometimes less. But nearly always below average. For our whole existence.
That's why the choice of wording struck me.
You can zoom out a bit more and it just gets clearer: https://en.wikipedia.org/wiki/Geologic_temperature_record#/m...
Further out and we're still one of the coldest periods: https://en.wikipedia.org/wiki/Geologic_temperature_record#/m...
We're ice-age dwellers. Always have been.
I can both be alarmed at how quickly the ice age humanity has evolved within is ending, and find that a very funny way of phrasing it. These things don't conflict in me, though it seems triggering to some. People are downvoting me with moral conscience, but I'm just over here laughing at a funny conjunction of paleoclimate and word choice. :) People getting offended by it kinda makes it funnier.
if you didn't intend to lessen the impact of that statement, why say something that is specifically meant to lessen the impact of the statement? just say what you want to say without the hedging.
> At the Great Midnight at the century's end, signifying culture will flip over into a number-based counterculture, retroprocessing the last 100 years. Whether global disaster ensues or not, Y2K is a singularity for cybernetic culture. It's time to get Y2K positive.
Mark Fisher (2004). Y2K Positive in Mute.
// By the time this fails, I should be sipping pina coladas on the beach.
Alas, he was still working, albeit at another firm.

But before you judge the fix too harshly, I bet it's just a quick and easy fix that will suffice while a proper fix (to avoid depending on external state) is written.
Some day, Pham Nuwen is going to be bitching about this test suite between a pair of star systems.
I agree that it’s plausible!
but, the solution now hides the problem. if i wanted to get someone to solve the problem i'd set the new date in the near future until someone gets annoyed enough to fix it for real.
and i have to ask, why is this a hardcoded date at all? why not "now plus one week"?
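A sketch of the "now plus one week" version (`cert_expiry` here is a hypothetical stand-in for whatever the real test would read from the certificate):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-in for the expiry the real test reads from the cert
cert_expiry = datetime.now(timezone.utc) + timedelta(days=365)

# Fails within a week of the assumption breaking,
# instead of on a hardcoded magic date years from now.
assert cert_expiry > datetime.now(timezone.utc) + timedelta(weeks=1)
```

The test then starts nagging a week before expiry rather than silently passing until a surprise date.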
But doing it right shouldn’t be all that hard.
> Us, ten years after generating the certificate: "Who could have possibly foreseen that a computer science department would still be here ten years later."
This was why there was a Y2K bug. Most of that code was written in the 80s, during the Reagan era. Nobody expected civilization to make it to the year 2000.
And they were mostly right! Not many 80s machines were still being used in 1999, but lots of software with roots back then was being used. Data formats and such have a tendency to stick around.
It is effectively trivial to buy millions of dollars of hardware to upgrade your stuff when compared with paying for existing software to be rewritten for a new platform.
Or better, its drivers run in what Windows version?
> Not a serious problem, but the weekdays are wrong. For example, 18-Apr-2127 is a Friday, not Sunday.
There are now many magical dates to remember - 2126 (I think the PR was updated after that comment) and 2177. There is also a 2028 somewhere.
I guess that's a matter of personal sensibilities, but it's pretty funny to me.
(Note: this is the only fact I know about it, happy to learn more.)
I have no idea about the development however.