Instagram is to employ AI to ensure it remains “a safe place” by filtering out abusive comments and spam.
The company said many users have reported that “toxic comments” discourage them from posting to the platform, so it has introduced two new tools to protect the community feel of the app, including a feature that will identify and block certain offensive comments.
An opt-in anti-abuse feature is being rolled out to Instagram users from today, allowing them to automatically hide potentially offensive comments on posts and live videos.
The tool is powered by machine learning.
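The article does not describe how Instagram's filter works internally, so the following is a purely illustrative sketch of the general idea: an opt-in setting that hides comments whose toxicity score exceeds a threshold. The keyword heuristic, the `UserSettings` class, and the term list are all hypothetical stand-ins, not anything from Instagram or The Drum.

```python
# Illustrative sketch only: a toy opt-in comment filter.
# The scoring function is a stand-in keyword heuristic, not a real
# machine-learning model, and none of these names are Instagram's.

from dataclasses import dataclass

# Hypothetical blocklist standing in for a trained classifier.
OFFENSIVE_TERMS = {"idiot", "loser", "trash"}

def toxicity_score(comment: str) -> float:
    """Return a crude 0.0-1.0 score from the share of flagged terms."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_TERMS)
    return min(1.0, hits / len(words) * 5)

@dataclass
class UserSettings:
    # Opt-in flag mirroring the feature described in the article.
    hide_offensive_comments: bool = False
    threshold: float = 0.5

def visible_comments(comments: list[str], settings: UserSettings) -> list[str]:
    """Drop comments above the threshold when the user has opted in."""
    if not settings.hide_offensive_comments:
        return comments
    return [c for c in comments if toxicity_score(c) < settings.threshold]

if __name__ == "__main__":
    feed = ["Great photo!", "You are such an idiot", "Love this place"]
    print(visible_comments(feed, UserSettings(hide_offensive_comments=True)))
    # -> ['Great photo!', 'Love this place']
```

In practice the scoring step would be handled by a trained model rather than a word list; the sketch only shows how an opt-in toggle and a threshold could gate which comments are displayed.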
Source: The Drum