You know how to tell that it wasn’t?
It’s using careful hedging language — “could be used to attempt”, “have the potential to”, “more effective”.
AI would just plow through that shit, hallucinating facts like there is no tomorrow.
This is nonsense. Passwords might have an interesting distribution, but the key space is flat. There is nothing to learn.
And I hope you didn’t mean letting an LLM loose on, say, the AES circuit and expecting it to figure something out.
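To make the “flat key space” point concrete, here is a minimal sketch using only the Python standard library (the password sample is hypothetical, purely for illustration): human-chosen passwords have a skewed, learnable character distribution, while AES keys are uniform random bytes with no structure to fit.

```python
import math
import secrets
from collections import Counter

def bits_per_symbol(symbols):
    """Empirical Shannon entropy (bits per symbol) of a sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical sample of leaked-style passwords: lowercase words, digits,
# predictable suffixes -- exactly the kind of structure a model can exploit.
passwords = ["password1", "qwerty123", "letmein", "dragon2024", "iloveyou"]
pw_chars = "".join(passwords)

# 128-bit AES keys are just uniform random bytes: every byte value is equally
# likely, so the distribution is flat and carries no signal to learn.
key_bytes = b"".join(secrets.token_bytes(16) for _ in range(10_000))

print(f"password chars: {bits_per_symbol(pw_chars):.2f} bits/char "
      f"(max {math.log2(len(set(pw_chars))):.2f} over the observed alphabet)")
print(f"key bytes:      {bits_per_symbol(key_bytes):.2f} bits/byte (max 8.00)")
```

Running it should show the password sample sitting well below the maximum entropy of its own alphabet, while the key bytes sit at essentially 8 bits/byte. That flatness is the whole point: password-guessing models work because people are predictable; there is no analogous distribution for a model to learn about keys.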
You can train AI to crack encryption
Oh do provide details.