For years, social networking platforms have relied on human review and reactive enforcement. But these moderation systems can no longer keep pace with increasingly frequent and sophisticated fraud, harassment, misinformation, and digital abuse. This realization led Zhang Lu, CEO of Soul App, to turn to technology to curb the chaos.
As one of the most popular social platforms in China, the company is keenly aware of how accountability, safety, and governance help quell user skepticism. Unlike many industry players, Soul does not treat these factors as peripheral growth strategies; instead, it views them as integral to building long-term legitimacy.
As such, Soul Zhang Lu’s cybersecurity team has aligned artificial intelligence, policy compliance, and user protection into a cohesive governance framework. The recently released “2025 Annual Ecosystem Safety Report” offers insight into the safety strategies and tools the company uses.
For starters, Soul does not treat safety compliance as a constraint; it is implemented as a design principle. Safety systems are presented not as external controls imposed by law but as internal mechanisms essential to the platform’s social contract with users. This matters because it signals the maturation of platform governance and acknowledges that trust can only be built through demonstrable systems and outcomes.
Soul Zhang Lu’s report for 2025 once again highlighted how the company continues to use AI extensively to create a safer, more positive environment for its users. Because manual moderation cannot realistically monitor millions of interactions in real time, the company uses AI as a compliance engine to close this gap.
At present, the platform runs seven coordinated AI models that operate continuously across content creation, user interaction, and reporting workflows. These systems analyze language patterns, behavioral anomalies, and content provenance to identify potential violations before they escalate.
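The report does not describe how these models work internally, but a multi-signal screening layer of this general kind can be sketched as follows. Everything here — the signal names, weights, and thresholds — is an illustrative assumption, not Soul's actual configuration.

```python
from dataclasses import dataclass

# Hypothetical per-model risk scores in [0, 1]; the signal names and
# weights below are illustrative assumptions, not Soul's real models.
SIGNAL_WEIGHTS = {
    "language_patterns": 0.40,   # e.g. scam phrasing, abusive wording
    "behavioral_anomaly": 0.35,  # e.g. burst messaging, mass friending
    "content_provenance": 0.25,  # e.g. reused or off-platform media
}

REVIEW_THRESHOLD = 0.5   # queue the item for human review
BLOCK_THRESHOLD = 0.8    # block before the content is delivered

@dataclass
class ScreeningResult:
    risk: float
    action: str  # "allow" | "review" | "block"

def screen(scores: dict) -> ScreeningResult:
    """Combine per-model scores into one pre-delivery decision."""
    risk = sum(weight * scores.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())
    if risk >= BLOCK_THRESHOLD:
        action = "block"
    elif risk >= REVIEW_THRESHOLD:
        action = "review"
    else:
        action = "allow"
    return ScreeningResult(risk, action)
```

The key design point such a layer illustrates is prevention: the decision is made before content reaches other users, rather than after a complaint arrives.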
In terms of governance, this approach prioritizes “reasonable prevention” over post-incident response. Such a proactive posture is increasingly necessary, since intent and process now matter as much as outcomes.
Soul Zhang Lu’s team reported that the seven models are directly responsible for the year-over-year decline in scam-related activity on the platform. In addition to internal efforts, Soul also places emphasis on coordination with external authorities.
Over the last few years, the platform has actively collaborated with law enforcement agencies and anti-fraud institutions, sharing actionable intelligence and supporting investigations. This partnership reflects Soul Zhang Lu’s recognition of the fact that platform safety cannot be siloed.
After all, digital harm often extends beyond app boundaries, manifesting in financial loss, psychological distress, or real-world crime. The only way to tackle the serious repercussions of these risks is through external enforcement mechanisms.
Soul also takes a distinctive approach to moderation, one that does not lean heavily on content takedowns and account bans. The app has opted for a more nuanced strategy that frames moderation as a form of risk management rather than punishment.
In keeping with this strategy, Soul Zhang Lu’s team employs AI-driven sentiment analysis to monitor conversational dynamics and identify early signs of escalation. Rather than immediately restricting users, the system often intervenes with prompts that encourage respectful communication. According to the report, these interventions occur at scale, shaping user behavior through subtle design rather than overt enforcement.
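A nudge-before-restrict flow of this kind can be sketched in a few lines. The keyword check below stands in for a trained sentiment model, and the word list and thresholds are made-up placeholders, not anything from Soul's report.

```python
# Illustrative sketch of a nudge-before-restrict moderation flow.
# The hostility check and thresholds are assumptions for demonstration;
# a production system would use a trained sentiment/toxicity model.
HOSTILE_WORDS = {"idiot", "stupid", "hate", "shut up"}

NUDGE_THRESHOLD = 2     # hostile turns before a courtesy prompt
RESTRICT_THRESHOLD = 4  # hostile turns before escalating to review

def is_hostile(message: str) -> bool:
    """Toy stand-in for a sentiment model's 'hostile' label."""
    text = message.lower()
    return any(word in text for word in HOSTILE_WORDS)

def moderate(conversation: list) -> str:
    """Return the intervention level for the conversation so far."""
    hostile_turns = sum(is_hostile(m) for m in conversation)
    if hostile_turns >= RESTRICT_THRESHOLD:
        return "escalate_to_review"
    if hostile_turns >= NUDGE_THRESHOLD:
        return "show_courtesy_prompt"
    return "no_action"
```

The graduated thresholds capture the proportionality idea the report describes: most conversations trigger nothing, a deteriorating one gets a gentle prompt, and only sustained hostility reaches enforcement.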
This helps the app balance proportionality against harm prevention, reducing the likelihood of abuse while preserving freedom of expression. In addition to state-of-the-art technology and collaboration with external parties, Soul Zhang Lu’s team has turned platform security into a shared responsibility through community governance.
Tens of thousands of Soul’s users now have formalized roles that enable them to contribute to content review and policy enforcement through structured mechanisms. In essence, this distributed governance model introduces a layer of human judgment that complements automated systems. Since users are no longer passive recipients of platform rules, this hybrid strategy helps to build trust.
Soul Zhang Lu’s report also spoke about how the platform treats youth protection as a regulatory priority. To safeguard the interests of minors, Soul uses AI behavioral analysis to enhance age verification and prevent impersonation.
Decisions about how much of the app’s content and features minors can access, however, are again shared with community governance. This combined approach supplements traditional identity checks and addresses scenarios where formal verification alone is insufficient.
All in all, Soul Zhang Lu’s team has shown that AI, policy alignment, and community participation can be integrated into a coherent governance strategy capable of tackling the challenges social platforms face today. It also points toward a future in which compliance, community, and innovation work together to create a healthy, safe, positive, and democratic digital environment in which governance is of the users, for the users, and by the users.
