Grammarly Shuts Down ‘Expert Review’ AI After Backlash Over Fake Author Feedback

Grammarly Pulls AI Tool That Mimicked Feedback From Real Writers After Public Criticism

Written By : Antara
Reviewed By : Manisha Sharma

Grammarly has disabled its ‘Expert Review’ AI feature after receiving massive backlash from writers and journalists who said the tool generated feedback that appeared to come from real authors without their consent. The feature provided users with writing advice as if it were coming from well-known experts, quickly raising concerns about misrepresentation and identity misuse.

The company confirmed that the feature has been taken down. Critics argued that it could mislead users by making AI-generated feedback appear to have been written by real professionals. Grammarly said it is reviewing the feature's design and considering changes to ensure greater transparency in the future.

Grammarly Expert Review Feature and Why It Was Withdrawn

The ‘Expert Review’ feature was built as a writing assistant that offered feedback styled after renowned writers, journalists, and subject experts. Users could select an expert profile and receive writing suggestions meant to simulate comments from that professional.

The feature ran into trouble as soon as it launched. Its evaluations were generated entirely by AI, with no input from the human experts whose names appeared on them. Critics complained that real writers' names were displayed without their consent.

Grammarly initially tried to address the issue by letting writers opt out of having their names used. Critics countered that when a person's identity is used, the platform must obtain consent up front rather than offer a withdrawal option after the fact. The company ultimately disabled the feature and said it would develop new guidelines for its AI tools.

Authors Criticize AI for Generating “Fake” Expert Feedback

Writers and journalists strongly criticized the feature after discovering that AI-generated comments were being presented as if the writers themselves had made them. Many users who tested the tool said it could easily mislead readers about the true source of the feedback.

Some authors publicly expressed concern that the feature used their professional identity to promote a commercial product. Critics also warned that these tools could blur the line between genuine expertise and automated suggestions.

The backlash spread through social media and professional circles, which created new debates about the need for consent and attribution in AI systems.

Risks of AI Imitating Human Voices

The controversy surrounding Grammarly's tool points to a problem that extends to generative AI more broadly. An AI system may be capable of evaluating and imitating writing styles, but presenting its output as if it came from real people raises serious ethical concerns.

AI tools that imitate specific human voices without clear limits risk eroding public trust. Companies building AI assistants will need stricter safeguards covering consent, attribution, and transparency to prevent similar controversies in the future.
