

That’s an apt example from English, especially given the visual similarity of the error.
It’s the kind of error we would expect AI to be especially resilient to, since the phrase “corner cube” probably appears many times in the training data.
Likewise, scanning electron microscopes are common instruments in schools and commercial labs, so an AI writing tool would likely infer that a correction was needed, given the close similarity.
Transcription errors by human authors, however, have been dutifully copied into later works ever since we began writing stuff down.
It’s semantics, but I think the person above is just pointing out that “AI” is an old umbrella term covering many technologies, past, current, and future, and that it shouldn’t be bound forever to one era’s misapprehension and misuse of a particular subset of those technologies.
Prior examples of AI include early work by Alan Turing. Current examples include tools that enable people with disabilities. Future examples might offer solutions to major problems we face as a society. It would be a shame if a term’s use as a buzzword were all it took to kill a discipline.