Using LLMs to provide peer review defeats the purpose of peer review, IMO; the point is to get expert human review of your work in the context of broader scientific knowledge. Also, LLMs aren't our peers. I'd be disappointed if a reviewer used an LLM to review my work (and alarmed if they uploaded my unpublished work to one).
That said, unless it's blindingly obvious that an LLM was used (such as the review including text like "Yes I can do that, here's an improved version of your review"), I would be very hesitant to claim that the reviewers used LLMs. It feels difficult to prove, especially when the reviews are not independent - they are both reviews of the same piece of work. The intro text where they describe what your paper is about feels robotic, but that seems to be an academic norm. And if the work does indeed have the issues identified by one reviewer, it shouldn't be surprising that the other reviewer picks up on the same issues.
Whether things like critiques of issues that are non-existent in your work, or already addressed by the text, are the result of LLM usage or of human error/laziness is also difficult to say. Humans make mistakes too, and I find it more useful to be generous in your assumptions. Replying under the assumption that an honest mistake was made gives the reviewer/editor an 'out' and hopefully leads to a productive conversation where you can get to your desired outcome.