Elon Musk's AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised AI deepfakes.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," reads an announcement on X, which operates the Grok AI tool.

The change was announced hours after California's top prosecutor said the state was probing the spread of sexualised AI deepfakes, including of children, generated by the AI model.

"We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok on X in those jurisdictions where it's illegal," X said in a statement on Wednesday.

It also reiterated that only paid users will be able to edit images using Grok on its platform. "This will add an extra layer of protection by helping to ensure that those who try and abuse Grok to violate the law or X's policies are held accountable," according to the statement.

"With NSFW (not safe for work) settings enabled, Grok is supposed to allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated films," Musk wrote online on Wednesday. "That is the de facto standard in America. This will vary in other regions according to the laws on a country by country basis," said the tech multi-billionaire.

Musk had earlier defended X, posting that critics "just want to suppress free speech" along with two AI-generated images of UK Prime Minister Sir Keir Starmer in a bikini.

In recent days, leaders around the world have criticised Grok's image editing feature.

Over the weekend, Malaysia and Indonesia became the first countries to ban the Grok AI tool after users said photos had been altered to create explicit images without consent.

Britain's media regulator, Ofcom, said it will investigate whether X failed to comply with UK law over sexual images. California Attorney General Rob Bonta has voiced concerns over the material generated, which depicts women and children in nude and sexually explicit situations.

Policy researcher Riana Pfefferkorn expressed surprise that X took so long to put these safeguards in place, saying the features should have been removed as soon as evidence of abuse emerged. Questions remain about how X will enforce its new policies and whether the AI can reliably determine whether an image depicts a real person.