Alia Al Ghussain, a researcher and advisor with Amnesty Tech, talks about the huge need to centre human rights in the technology sector.
“This is a bad situation and it’s primed to get even worse,” says Alia Al Ghussain, following Meta’s move earlier this year to end its practice of fact-checking information posted on its platform in the US. The development is a backward step designed to curry favour with the Trump administration, according to Alia:
“[Meta’s move] marks a very clear retreat from the company’s previously stated commitments to responsible content governance, and I think it also shows that Meta hasn’t learned from its previous recklessness – or, if it has learned anything, those learnings have been discarded, basically.”
Amnesty Tech has previously investigated and published reports on the role that various tech companies have played in human rights abuses. One of the first major reports to shed light on the issue – the responsibility that digital players must now own – examined the persecution of the Rohingya Muslim community in Myanmar and the role Meta played, in particular how hate speech on its platform promoted violence against the Rohingya. The report found that “Meta’s algorithms proactively amplified and promoted content which incited violence, hatred, and discrimination against the Rohingya – pouring fuel on the fire of long-standing discrimination and substantially increasing the risk of an outbreak of mass violence.” It concluded that “Meta substantially contributed to adverse human rights impacts suffered by the Rohingya and has a responsibility to provide survivors with an effective remedy.” Another major report Alia was involved in looked at Facebook’s role in contributing to violence during the brutal two-year conflict in Ethiopia’s northern Tigray region, which began in 2020 when the Ethiopian government launched military operations there against the region’s ruling party.
Amnesty concluded:
“Meta, once again – through its content-shaping algorithms and data-hungry business model – contributed to serious human rights abuses.”
In this episode, Alia explains how big tech players make their money from users’ data and discusses the harmful impacts of algorithmically curated content. She also shares her knowledge of how content goes viral, emphasising that it is not necessarily because the content is good, but rather because it elicits an emotional reaction among users and generates a lot of engagement.
Ultimately, Alia believes that digital platforms need to be redesigned with human rights at the centre, and she calls for more governance in this area:
“I think that governments and some regional bodies, like the EU, really need to double down their efforts to rein in big tech companies, like Meta and others, and also to hold them accountable… because this is about people’s lives at the end of the day. This isn’t an intellectual debate.”
Presented and produced by Evelyn McClafferty
With thanks to our donors: Irish Aid.
Note: The views and opinions expressed in this episode do not necessarily represent those of IRLI or Irish Aid.