Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including sensitive areas such as AI safety and youth risk, NPR's Bobby Allyn and Shannon Bond reported on Friday, citing internal documents viewed by NPR. Under the new system, Meta says product teams will be asked to fill out a questionnaire about their work and will then receive an "instant decision" listing AI-identified risks, along with requirements an update or feature must meet before it launches. Meta said in a statement that it has invested billions of dollars to support user privacy and that the changes to product risk reviews are intended to streamline decision-making, adding that "human expertise" is still being used for "novel and complex issues" and that only "low-risk decisions" are being automated.
