I'm curious whether you feel you're actually in control of policy decisions about data protection, or whether you feel you could be hit any day with the "$5 wrench" whenever the government deems it necessary. I'm starting to feel that in this environment, nothing is safe, even if it's encrypted and on FOSS platforms.
Personally, I advocate for self-hosting communications software, ideally on physical hardware that someone in your community has control over. Zulip runs great on old laptops, if you can solve the IP address problem for hosting it in your house.
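One common way to work around a home connection's changing IP address is a dynamic DNS service. A minimal sketch, assuming a DuckDNS account (the domain and token below are hypothetical placeholders, not real credentials):

```shell
#!/bin/sh
# Build the DuckDNS update URL for this host. Leaving ip= empty tells
# DuckDNS to use the IP the request arrives from (your home connection).
# DOMAIN and TOKEN are placeholders; substitute your own account values.
DOMAIN="${DOMAIN:-my-zulip}"
TOKEN="${TOKEN:-00000000-0000-0000-0000-000000000000}"
URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&ip="
echo "$URL"
# Run the actual update from cron, e.g. every 5 minutes:
#   */5 * * * * curl -fsS "$URL" >/dev/null
```

Point your Zulip hostname at the dynamic DNS record (via a CNAME) and the "laptop in your house" becomes reachable even as your ISP rotates your address.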
And if you want to be extra careful, put your chat system behind a VPN/firewall, so it's difficult to identify what software is being used externally.
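As a sketch of that idea: with something like WireGuard, the only thing visible from outside is a single UDP port speaking an opaque protocol, and Zulip's HTTP ports are reachable only over the tunnel. The keys, addresses, and interface name here are placeholders:

```ini
# /etc/wireguard/wg0.conf on the Zulip host (illustrative values only)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# One [Peer] section per community member's device
PublicKey = <member-public-key>
AllowedIPs = 10.0.0.2/32
```

An external port scan then shows no web server at all; members connect to `10.0.0.1` once their tunnel is up.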
And if you're not going to do that, because it sounds like too much work, the next best thing is to at least pick a cloud service from which you can migrate your group to paranoid self-hosting overnight, if you decide the work is now worth it.
Self-hosting this way doesn't protect against all threat models. I am human and have children who I love dearly, so it's hard to rule out the possibility of my being compelled to make a malicious release.
But at least the Zulip source code is entirely open and highly readable, so users would have a chance to notice and decline to upgrade. With a centralized architecture like Discord's, you're entirely reliant on whistleblowers.