I warned the Project Eleven people that this would happen: that they'd end up awarding the bitcoin to whoever best obfuscated the fact that the quantum computer wasn't contributing (quite possibly including a submitter fooling themselves). I guess they didn't take it to heart.
Judging by the fact that the original code does more classical work than the PRG solution, and, in more practical terms, that it makes network calls, I'd say the quantum-integrated code is a lot slower for this set of problems.
src: https://github.com/GiancarloLelli/quantum/blob/7925f6ec5b57f...
Recovering a 17-bit ECC key isn't a challenge for current classical computers; brute force alone does it.
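To put that in perspective, a 17-bit key space is only 2^17 = 131072 candidates, so exhaustive search over a curve group finds the key near-instantly. A minimal sketch over a made-up toy curve (y^2 = x^3 + 7 mod p); the parameters and the secret are illustrative, not the ones from the experiment:

```python
# Toy curve y^2 = x^3 + a*x + b over F_p. Parameters are illustrative only.
p, a, b = 1_000_003, 0, 7

def ec_add(P, Q):
    """Affine point addition; None stands for the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

# Find any base point G on the curve (p % 4 == 3, so sqrt is one pow).
for x in range(1, p):
    rhs = (x * x * x + a * x + b) % p
    if pow(rhs, (p - 1) // 2, p) == 1:
        G = (x, pow(rhs, (p + 1) // 4, p))
        break

secret = 99_137                          # an arbitrary 17-bit private key
Q = None                                 # public key Q = secret * G
for _ in range(secret):
    Q = ec_add(Q, G)

# Brute force: walk k*G for k = 1..2**17 until it matches the public key.
R, recovered = None, None
for k in range(1, 2**17 + 1):
    R = ec_add(R, G)
    if R == Q:
        recovered = k
        break
print(recovered)                         # a working private key for Q
```

This runs in well under a second in plain Python; with an optimized implementation the whole key space is a rounding error.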
perfection
Scammers can take an old defunct coin or create a new one, buy up or create supply, strap ML-DSA onto it, pump their shitcoin claiming it's quantum-safe, then unload.
Eventually low-information retail will get wise to this; I honestly don't know who it even works on right now.
If the quantum computer were a key component of the solution, replacing it with an RNG would either have stopped yielding the right result, or at least taken longer to converge on it. Instead, the author shows that it runs exactly the same, proving all of the relevant logic was on the classical side and the QC was only contributing noise.
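The control experiment is easy to picture with a toy model; nothing below is the repo's actual pipeline, and `postprocess`, `SECRET`, and the sample counts are all made up. The point is that a classical stage which exhaustively searches the 17-bit keyspace succeeds no matter where its input samples come from:

```python
import secrets

KEYSPACE = 2**17
SECRET = 99_137                            # hypothetical 17-bit private key

def postprocess(samples):
    """Stand-in classical stage: uses its input only as a starting hint,
    then tries every candidate in the keyspace anyway."""
    start = samples[0] % KEYSPACE
    for i in range(KEYSPACE):
        k = (start + i) % KEYSPACE
        if k == SECRET:                    # stand-in for "candidate verifies"
            return k

qc_samples  = [secrets.randbelow(KEYSPACE) for _ in range(100)]  # pretend QC output
rng_samples = [secrets.randbelow(KEYSPACE) for _ in range(100)]  # /dev/urandom stand-in

print(postprocess(qc_samples) == postprocess(rng_samples) == SECRET)  # True either way
```

Identical success with both inputs is exactly the signature the author describes: the samples are interchangeable with noise, so the QC did no work.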
If the results are statistically identical to guessing then it seems like you've just built a Rube Goldberg contraption.
weakened algorithms to the extreme (17 bits in 2026 LOL).
https://blog.google/innovation-and-ai/technology/research/qu...
At least for breaking crypto, which seems to be its headline feature. Maybe there are other useful things it can do?
----
The article itself is maddeningly vague on exactly what happened here.
At first blush, it looks like the quantum computer was just used to generate random noise? Which was then checked to see if it was the private key? Surely that can't be.
The github README [0] is quite extensive, and I'm not able to parse the particulars of all the sections myself without more research. One thing that caught my eye: "The key insight is that Shor's post-processing is robust to noise in a way that raw bitstring analysis is not."
"This result sits between the classical noise floor and the theoretical quantum advantage regime. At larger curve sizes where n >> shots, the noise baseline drops below 1% and any successful key recovery becomes strong evidence of quantum computation."
So... is one of the main assertions here simply that quantum noise fed into Shor's algorithm results in requiring meaningfully fewer "shots" (this is the word used in the README) to find the secret?
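For what it's worth, the README's claim about post-processing robustness is at least plausible in isolation: Shor's classical step runs the measured value through a continued-fraction expansion, which snaps to nearby small-denominator rationals and so tolerates some measurement noise. A self-contained toy instance (the modulus, base, and qubit count are my illustrative choices, not the repo's):

```python
from fractions import Fraction

# Toy order-finding instance: a = 7 mod N = 15 has order r = 4
# (7^4 = 2401 ≡ 1 mod 15). With n = 8 counting qubits, an ideal
# measurement for s = 3 would be round(s/r * 2**n) = 192.
N, a, n = 15, 7, 8
noisy_m = 193                        # ideal 192, plus one count of noise

# Shor's classical step: snap m / 2^n to the nearest rational with a
# small denominator (r < N), then verify the candidate order.
guess = Fraction(noisy_m, 2**n).limit_denominator(N)
r = guess.denominator
print(r, pow(a, r, N))               # 4 1  -> r = 4 survives the noise
```

That robustness is unremarkable on its own, though; it says nothing about whether the noisy bitstrings carried any signal in the first place, which is the actual point of contention.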
Someone help me understand all this. Unless I'm missing something big, I'm not sure I'm ready to call this an advancement toward Q-Day in any real-world sense.
We are still doing science and engineering experiments, not making production anything.
QC relies on the observed output being statistically significant. This rebuttal is pointing out that Project Eleven only ran the algorithm once. At this point, there is no proof the IBM QC platform is generating anything statistically significant, let alone anything that outperforms feeding the same pipeline /dev/urandom.
Basically, there is no proof this was real quantum computing instead of random noise picked up by the hardware inside the QC.
Now, to answer this rebuttal and show the QC is doing anything at all, they'd have to run it a significant number of times and show it breaks the key more often than feeding the same pipeline uniformly distributed random noise from a source like /dev/urandom.
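Such a comparison is a plain one-sided binomial test. A sketch with made-up numbers (the shot count, run count, and success tally are hypothetical, not measured):

```python
from math import comb

# Null hypothesis: the QC is a uniform guesser. With a 17-bit key and
# `shots` samples per run, a uniform guesser succeeds per run with
# probability p0 = 1 - (1 - 1/2**17)**shots.
shots, keyspace, runs = 1000, 2**17, 200
p0 = 1 - (1 - 1 / keyspace) ** shots     # ~0.0076 per run for /dev/urandom
qc_successes = 12                        # hypothetical QC tally over 200 runs

# One-sided exact binomial tail: P(X >= qc_successes) under the null.
p_value = sum(comb(runs, k) * p0**k * (1 - p0) ** (runs - k)
              for k in range(qc_successes, runs + 1))
print(p_value < 0.01)                    # True would mean the QC beats guessing
```

And as the README itself concedes, at these tiny curve sizes the noise baseline p0 isn't negligible, so a single lucky run proves nothing; only a tally well above the baseline over many runs would.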