Speaker's Conference (2024) — Oral Evidence (HC 570)
Good afternoon and welcome. For the record, will you introduce yourselves, please?
Hello. My name is Megan Thomas, and I am a public policy manager at Meta.
Hello, my name is Claire Dilé. I am a public policy director at X.
Hi. I am Patrícia Rossini. I am a senior lecturer in communication, media and democracy at the University of Glasgow.
That is great. Thank you. We will go straight into questions.
Thank you very much for coming in to give evidence. What do you consider the role of X, Facebook and Instagram to be in the political process? What responsibilities does that mean that you have, in your view?
Thank you so much for your question and for inviting me to this Committee. X is a real-time information service, and our mission is to ensure that freedom of expression and access to information are guaranteed to our users. It is important for our service that we promote freedom of speech and of expression as much as possible, as long as users abide by Ofcom rules. Politics, of course, is one of the main topics that people can discuss on our platform. They can discuss sport, entertainment, societal topics or other things that interest them; politics is one of those. The trend that we can observe is that our society is becoming a bit more digital. In that sense, we see that politicians increasingly use social media, because it is an effective tool to reach a greater audience and so to communicate with citizens. As I was saying, if you are, for example, a sportsman or celebrity, you might want to communicate with your audience on X, or as a football player you might want to communicate with your fans. As a politician, it is a way to communicate with your constituents and citizens and to get direct feedback. That is a trend that we are seeing on the platform, but it is not a trend particular to politics; we observe it in other aspects of society. So, for us, that is how we see our role. We need to make sure that debate can happen in a safe and free way on our service.
Thank you very much for inviting me to give evidence to the Committee. We provide platforms for people to be able to connect with each other. Many use them to engage in debate and provide a space for conversations. Many people use our platforms to engage in elections or discussions, or for community organising. We strive to promote a space that encourages inclusive conversations and a respectful environment. Many MPs use our platforms in order to campaign around election time, and to engage with their constituents on political matters. We know that we have a responsibility to balance freedom of expression with the safety of our users.
I appreciate those answers, and I am glad that you mentioned responsibilities, because that clearly has to be part of it. Do the platforms that you represent wish to become more or less engaged in the political debate that we have here, and in the democratic process that we have in this country? Or are you content with the status quo?
For us, what is important is that people can use our service, and then they can decide what they want to use it for. Our goal is to make sure that people use it in a safe way, and that they are encouraged to participate. Being encouraged to participate means both being able to express your ideas and opinions and feeling safe about expressing them. It works both ways: ensuring people's safety and their ability to speak. We do not determine what people should discuss. They are free to discuss everything. We have many communities on our platform, and the only thing that we ask people when they join the service is to respect the law and our community guidelines. But for us, we are very happy for them to come to X to discuss politics or any other topic. The nature of our platform being public means that it is obviously a good place to discuss public policy, or what is happening in your country, city or region, so quite naturally it is being used for that, but we do not have a particular position on what people should discuss.
That is similar for us. We know that many MPs use all our platforms in order to engage in political debate. I think that most MPs have a Facebook page, for example, and we think that is great. But we do know that we have a responsibility to balance freedom of expression with the safety of our users, including MPs, on the platform. That is why we have strict rules and policies in place in order to safeguard that experience.
You will therefore take note of, respond to and take very seriously the conclusions of this Conference, as we discuss the way in which social media companies are enabling people to undermine the democratic process, and to do so in a way that they could not do in the offline world. Do you accept that you would take that seriously if that were the conclusion of this Conference?
From a Meta perspective, we absolutely recognise the importance of the Conference’s work to date on this issue. We know that MPs can face unacceptable levels of abuse online and offline, and that is exactly why we have introduced a package of measures in order to best support MPs and public figures more generally on the platform. Perhaps it will be helpful to the Committee for me to take a couple of minutes to outline what that is. First, we have our community standards, which are the rules that we design. Those determine what type of content is and is not allowed on the platform. We design those rules in partnership with a range of experts, from human rights organisations and charities to researchers and others, and we increasingly use a combination of automation and human reviewers to find and remove the content that breaches our community standards. We are getting better at that and at doing it automatically. We document our efforts in our quarterly transparency reports, which are externally audited by EY, and I can go into that in a bit more detail in due course. We also have a range of partnerships in place. One of our most important for the conversation today is a dedicated escalation route set up between SIRAS, which is the parliamentary online security team, and our trusted internal reviewers—they have trusted flagger safety status, which means that things are automatically sent to that team. We also work closely with the political parties and parliamentary protection police. We have introduced tools to help people to manage their experience and, finally, we run training sessions and have resource guides. We hope that together—from automated removal to the training—that package of measures can helpfully best support MPs’ experience on our platforms.
On the responsibilities that you have both described, do you take the view—from your platforms’ perspectives—that those responsibilities are any different during election periods? If so, how does your behaviour change either in relation to the resources you allocate or in any other way?
That is a good question. We have a responsibility at all times. I think it is very important that people who use the platform are not victims of abuse. We have mechanisms in place to make sure that this is not the case. We also have collaboration in place. For instance, we co-operate with regulators and the Government but also with law enforcement. Whenever there is abuse, we encourage people, beyond the moderation action that we are taking, to go to law enforcement and report it, because we will co-operate with them. We answer law enforcement requests, and we communicate information to them that can help them and that can lead back to the perpetrator. We also work with civil society organisations to understand how they evaluate the threats. They are a trusted partner of ours, so we take their response very seriously. In terms of elections, we know it is a very sensitive democratic moment, because it can determine the future of the country. There are a couple of things that we put in place. First, before the election, we assess the threat profile for the election based on a variety of signals that we observe on the platform in a particular country. For instance, if we observe certain variations occurring on the platform, we will allocate a different risk profile. But also, if we observe interference from a foreign country, for instance, or a malicious actor, that is something that we would factor in as well. Once we have the threat profile, we allocate resourcing to match that profile. In this situation we would conduct more proactive sweeps of the platform. Proactive sweeps mean that we would be proactively searching for potential violations on the platform to ensure that it remains safe. Another thing that is very important is that during election time we activate a particular policy—our civic integrity policy. That means that we remove any election-related misinformation. I will give you an example: if we find content on the platform that misleads people about the date of the election or the conditions for voting, we would mark it very clearly so that people are not misled. The objective is to prevent voter suppression and people being discouraged from going to vote, and to encourage civic engagement. We have in place a particular escalation channel for candidate abuse. We work with the different political parties, and we provide them with a specific address to report violations to us to make sure that candidates remain safe during the election. This would be the case for any type of abuse, but also, for instance, any risk of impersonation—someone pretending to be a candidate. We do training on the safety features that are available to users. Another set of processes that we have put in place is that we work with partners. In the last general election in the UK, we worked with an NGO called Shout Out UK on a media literacy campaign in co-operation with Ofcom, to elevate content on the platform that would encourage people to go to vote. We ran election reminder prompts and search prompts about the election, as well as the media literacy campaign. The idea was to try to use our service to encourage people to vote and remind them that the election is taking place. The work is on both sides: the safety side of things and promoting civic engagement. That is how we work in election time.
Some of the measures that we put in place are quite similar to what Claire from X has outlined. As a baseline, we have 40,000 people at the company working on safety and security. It is an area that we have invested $30 billion in over the last 10 years. But knowing that during election times there is a heightened critical moment, we put in a layer of additional resources on top of the teams—the 15,000 content reviewers, the 40,000 people in general, and the investment that we make. We establish an election centre around election time. We get a team from across the company—from legal, policy, engineering and operations—for round-the-clock monitoring of content on the platform. They do proactive sweeps, for example, on candidate accounts to identify bullying and harassment, but also areas such as impersonation. We have strict policies that are bespoke for elections against areas like voter suppression and election misinformation. We remain in regular conversation during that time with the regulators—for example, with the UK Electoral Commission and Ofcom—on what we are setting up during that time. We also have civic participation products that we put on the platform—for example, “I voted” labels and things like that.
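Neither witness sets out how these proactive sweeps work in practice. Purely as an illustration of the impersonation checks both witnesses mention, the sketch below shows one way a sweep might flag look-alike accounts by comparing display names against a register of candidates; every name, handle, threshold and function in it is hypothetical rather than anything Meta or X has described.

```python
# Illustrative sketch only: a hypothetical impersonation sweep that flags accounts
# whose display names closely match a registered candidate's name. All account
# data, thresholds and function names here are invented for illustration.
from difflib import SequenceMatcher

CANDIDATES = {
    # hypothetical register: candidate name -> their known, verified handle
    "Jane Example MP": "@janeexample_official",
}

def similarity(a: str, b: str) -> float:
    """Rough string similarity between two display names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def sweep_for_impersonation(accounts, threshold=0.85):
    """Flag accounts that look like a candidate but are not the verified handle."""
    flagged = []
    for handle, display_name in accounts:
        for candidate, verified_handle in CANDIDATES.items():
            if handle != verified_handle and similarity(display_name, candidate) >= threshold:
                flagged.append((handle, display_name, candidate))
    return flagged

# Hypothetical accounts seen during a sweep
seen = [
    ("@janeexample_official", "Jane Example MP"),  # the real account: not flagged
    ("@jane_examp1e", "Jane Examp1e MP"),          # look-alike: flagged for human review
    ("@football_fan_99", "Sunday League Updates"), # unrelated: not flagged
]

for handle, name, candidate in sweep_for_impersonation(seen):
    print(f"Review {handle} ({name!r}): possible impersonation of {candidate}")
```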
I have heard all that; however, when we did a survey of MPs, it was clear by quite a long way that they thought that they had more abuse on X and Facebook than on other social media such as Instagram, for instance. Of the people who responded to the survey, 80% had experienced abuse or threatening behaviour on X, and 77% had on Facebook. By contrast, it was only 15% on Instagram. What do you think of that finding, how do you react to it, and why is Instagram different? That is one of yours I know, Megan.
I am happy to go first on that. Across any platform we think that the abuse and harassment directed towards MPs is unacceptable. To directly answer your question on why there may be a disparity, it could potentially be something to do with Facebook's scale. It is by far the largest social media platform, with 3 billion monthly active users. We also see that it is a place where people tend to engage more in long-form discussion and political conversations, whereas Instagram tends to be more of a space for people to engage with influencers and creators, and engage in lifestyle conversations. The work that the Conference is doing to point out these findings is really important. We have our community standards enforcement reports, the transparency reports that we publish on a quarterly basis; those reports detail the amount of content that we find and remove proactively before anyone has to report it to us, and they show the prevalence of certain categories of harmful content. If I take hate speech as an example, from what we see it is very similar across both Instagram and Facebook. That is not to diminish the experience of those in the room, and MPs in general, but I think it may point to the fact that MPs have a different experience, and it is not the same experience for other people. We absolutely recognise that MPs have a unique experience across our services, and that is exactly why we have implemented a range of measures, including the dedicated escalation channel.
May I ask Claire that question? Then, Patrícia, I would be grateful if you would comment on what they have said.
First of all, there is always room for improvement for us. That is something that we work on, and have the responsibility to work on: improving detection, especially the proactiveness of detection. That is something that we are actively working on, as well as working on how content that has been reported on the platform and surfaced to a human agent can be actioned as quickly as possible, because that is where we will make progress. On why you potentially see more abuse on our platform, it is hard to point to a scientific explanation, but I can talk about one aspect. We are a public platform—obviously, X is a global conversation. With a platform being public, there is perhaps also a little more risk of having that type of content. That is something that we also have to factor into our risk profile, and something that we discuss with Ofcom under the OSA. To perhaps remind the Committee—I know that you know this as well, and it is similar to what Megan from Meta was saying—we know that public figures, in particular, are more exposed to abusive content. Improving on that is and should continue to be a workstream for us. But I just want to say that this type of behaviour is, of course, forbidden on the platform: any harassment, violent speech or hateful content has no place on X. Then, to put it in perspective for you, one violation is already one violation too many, but it is still a very small fraction of the overall conversation that you see on the platform. For instance, on X, you have 500 million tweets—sorry, posts—every second[1] on the platform, globally. That is really a lot. When we did our last transparency report, for the second half of 2024, we saw that the violation rate across all our policies was 0.017% of all the posts that you see on X. Of course, it is good that it is a small number. It is always too much, but, to put it in perspective, the number of posts that are in violation remains quite a small fraction of all the posts that you see on the platform.
Patrícia, would you like to comment on that?
From a different perspective—not from a platform perspective—we need to think about platform affordances that do not just facilitate this kind of abuse flourishing and continuing to happen, but drive further engagement. You are talking about platforms that are mainly textually based. You are talking about text, versus Instagram, which is more of a visual platform, or others, such as TikTok, which are also visual. That also determines the prominence of comments in relation to the rest of the content you see. If you are on X, you are really just seeing text, text, text, with a random video or image. Whereas on Facebook, if comments get a lot of engagement, they also potentially become more visible. From empirical research, we know a couple of things. We know that when you have a lot of abusive content, or even uncivil content, that tends to breed more abuse and more incivility. So the visibility of abuse—especially in textually based platforms—makes these things more prominent. It might mean that as people see it, they think it is acceptable and just how we say things these days, and hence they become more and more abusive. There is also something else to do not just with visibility, but with the extent to which this kind of engagement in comments, replies and so on can in turn drive algorithmic amplification. We know that platforms have to organise content. The ways in which they do that are really not transparent to people like me—or, I am sure, to people like you—but we do know that engagement with content matters, right? I do not know the extent to which positive versus negative engagement matters. There is some suggestion that negative engagement matters a lot. So it is possible that you have this toxic spiral of hate whereby abusive comments attract more abusive comments. These things are all more visible—because we are talking about a textual platform—and hence more people are being exposed to that content. That is making it quite unmanageable, affecting not just the targets but bystanders to all that abuse. I would like to hear more about these dynamics and how they might influence content visibility. I think that probably does explain why, especially for those suffering from online abuse and harassment, it starts with one comment, and then it is 10 and then 100, and then it becomes completely unmanageable. Part of it has to do with algorithmic amplification, which I would like to hear more about.
I hear about all these good things that you are doing. The fact is that the abuse is getting worse and the hate is getting worse. Particularly since the change of ownership with X, I would argue that in some circumstances whipping up the hate that is out there is actively encouraged.
Sorry, what is your question?
I am asking you to comment. You have said that you are doing all these different things to make it not happen, but the fact is that it is getting worse. My argument is that whatever you are doing, it is not working. I question whether, particularly with X since the change of ownership, in some circumstances it is actively encouraged and whipped up.
If you look at the transparency report—we publish our own global transparency report every six months—we report on the amount of content actioned under different policies. When you think about hate, it would essentially be captured by three policies. It would be hateful conduct, which is attacking people on the basis of protected categories—that could be, for instance, ethnicity, gender, sexual orientation or religion. It could be violent speech—inciting violence against a specific person or having a violent discourse against that person. It could also be abuse and harassment, which is trying to shame, harass and degrade others. Those three policies could encapsulate hateful behaviour. If you look at the report, you will see that these are policies where we have quite high enforcement actions. We keep figures about the amount of content we action under those policies, and they are still quite high—it is thousands, if not millions. We have not observed any particular decreasing or increasing pattern. In any case, it is also important to keep in mind what is happening in real life; the figures tend to evolve with it. In terms of what we are doing as a platform—this is something that continues to happen gradually; there has not been much change—we are working on developing our safety response to this phenomenon. Notably, we are investing a lot more in machine learning, for instance, and in how to work with AI to make sure that we are capturing this content before it is even shown to users. When you act proactively, this content will not even be on the platform. That is quite important, because then people do not see it. Of course, when you talk about hateful content, it is in text, and there is a dimension of, "What is the context around what people post? What did they want to say?" On that, we also rely on human agents to review this content. This content is surfaced to human reviewers either proactively or on the basis of people reporting it. So we are working on these different parts of our response to try to remove content more quickly and more efficiently, and that is something that we continue to be committed to. It is always difficult for us to hear that that is not the experience that you are having. It means that we have to continue communicating about what we are doing, and that we have to continue improving. Hopefully, with some time, you will have a better experience and it will reflect more the experience that people have on the platform. But it is indeed our objective to get better at catching this content.
What I would agree with Mark on is that your tolerance levels are greater than they used to be. Security-wise, under your old name, the House would come to you and we would have 95% to 96% of complaints taken down. Now, we are down to 45%. So either your tolerance levels have changed, which would be the suggestion that I would get from that—or is there something I am missing? That is the difference with which we are having to try to operate with you as a company. The only thing that has changed is the ownership, and the name. What was previously taken down is no longer taken down.
I think what I can speak to—what I was telling you—is that we have a set of public policies, which are accessible online, covering anything that would correspond to violent speech on the platform, hate on the platform or a hateful and violent entity; the last of those is an account-level policy, so we would take the account down on that basis. With any form of abuse and harassment, of course, we would take into consideration co-ordinated or targeted harassment. That is something that we do not allow, so when we get a user report or a law-enforcement report, or when we detect it, we take that down. Internally, in terms of the figures we publish, we do not see that there is a decrease. It is quite consistent; it is quite stable. We are always happy to review what you observe, and if you have any evidence or examples that you want to share with us after this Committee, we can absolutely have a look and get back to you. I want to reiterate that the safety of our users is very important. Like I said at the beginning, if people do not feel that they are encouraged to participate in the conversation, then they will not express themselves, and that is absolutely not our objective, because then the conversation has no value. There is just no interest for X in having a conversation that has no value. Like I was saying, this is something that we continue to invest in, and it is not the company's objective that people should be less safe when they use the service. On the contrary, we hope that people will feel safer, and that content is detected and removed more quickly when it has to be removed.
I would just say that obviously the bar is a lot higher than it used to be, and that is the difference.
Very quickly, have you ever taken down anything that Elon Musk has said?
What is for sure is that the rule of the platform will apply the same way for him as—
No, that wasn’t what I asked—sorry. I just asked a very simple question: have you ever taken down anything that Elon Musk has said?
I do not have that information for the Committee, but very certainly we did, if he crossed the line, like he probably did, as any other user, because he is not like—
Wherever the line is!
No, but this is a very serious question that you are asking, so I am going to answer you very seriously. There is no user who is above the rules of the platform, not Elon Musk or anyone else.
Can I ask what X’s definition of hate is? I have long since stopped reporting comments that I receive on X. I have received death threats that have been investigated by the police but not removed by the platform. I have had numerous examples of comments that have not been removed, where individuals have been prosecuted in a court of law in this country under the Malicious Communications Act. Is there any correlation between UK law and what X decides to remove from its platform? I receive quite a lot of abuse. Here is one from this morning: “Are you white?? No ! Then you have no say in what goes on in England - in fact get the fuck out. We don’t want immigrants”. Here is one from earlier today: “There is nothing wrong with being racist. You need to go back to Africa. Black people aren’t worthy of the UK.” “Leave, while we still let you. You grimy coloniser.” “When are you going to be deported back to your own country?foreigners should be banned from British politics. British politics for British people”. “You just got owned Uncle Tom. Fuck me, I’m blacker than you.” “Take the hint and return to your ethnic homelands. You’re not British. You will never be British. Go home.” I would argue that if I report all of those comments, not one of them would be taken down by X. What guarantees could you offer me that there can be confidence among MPs, particularly MPs who suffer an enormous amount of racist abuse on your platform, which is not policed very effectively, that if you report those comments, they will be removed? If they aren’t removed, what justification would you give, as the Government Affairs Director, as to why they were not removed?
I—
As an addition to that, even if those comments are not reported, what filter do you have to identify them? Why should this be the responsibility of the MP?
It is a very good question. First of all, I am sorry you had this experience. Those comments are absolutely abhorrent and obviously have no place on the platform. In terms of the way we would work—you mentioned the illegality of the comments—when we get a report from law enforcement in the UK, we have a specific team who would look at that report under the locally applicable law. They would also take action on the basis of local law, which is important. Any content on the platform that is illegal under the law of the land has to be treated as such—
On that, whenever I have reported comments to the police, when they have investigated them, they have submitted an Optica request to X and X has refused to provide the identity of the individuals who have made those comments. Could you also speak to that point? Why do you refuse to co-operate with law enforcement?
There are two aspects to this co-operation. First, we answer different types of requests. The first is removal requests—for instance, on a specific comment. If it is illegal, we would remove it in the specific country because it is illegal in that country. Then, we answer information requests, which is when we co-operate with the police for them to find the person and prosecute, or potentially prevent something bad from happening.
You don’t provide that information. You refuse to.
Sorry—let me finish my train of thought. Then there are preservation requests. We can preserve information for the purpose of the investigation that the police are conducting. In any of those cases, it is a dialogue with the officer. I cannot speak to the specific cases, because I do not know about them, but it is always a matter of being able to provide more context—for instance, about a specific individual that they are investigating. For us to be able to disclose information, we need evidence. What I am trying to say is that sometimes a first report or a first contact might not be enough, and there is a need for more context.
What is the threshold for evidence? I have just read out a load of tweets. What is your threshold for evidence in order for the police to be given the details of the identity of the individual who has made those comments?
I think it is more about having more context than really having a threshold—
What does that mean?
It is about trying to explain about a certain threat, for instance, in real life, about a person. I am giving you a general example. I am not talking to the specifics.
I have asked you a very specific question, which you don’t seem to be able to answer. What is the threshold for X to be able to provide information about an individual who has made an illegal racist comment, in response to a request from the police? This is not from an individual. It is from the police.
To put it another way, what is the context that would make those statements acceptable to you?
I think perhaps it would be useful for us to check on the specific cases and come back to you with an informed answer. If I am answering you right now, it is not an informed answer, and so I don’t think that is useful for you. I do not work in this team—there is an expert team in the company that works on this.
This is very important and very concerning, because if somehow the law is being broken and you will not give the information to a law agency, that is a problem. I do not want you to answer now; I think that would be unfair. I would like you to take this away and inform us on the questions that you are unable to answer at the moment because you are only generalising. That would help us going forward.
My colleague asked if the owner of your company had ever had any of his comments taken down. He called one of our colleagues a “rape genocide apologist” and said that she should be jailed. That led to her safety, which my colleague knows about directly, being extremely compromised, but somehow that is not a justification for removing that message. I find that very difficult to accept, particularly in the context that, as Mr Speaker says, the bar at which you will come back to us has been raised at least twice because there is no response. Ben has just described to you what happened to him today—it must happen every day. You asked for context and examples. You have been given some examples of particular posts. Would you like to speak about them?
Thank you for that question. It gives me the occasion to explain myself a bit better—maybe I was not being clear. What I was saying is that in some situations, we might need more context from the police officer. The specific cases that you mentioned are bad comments—we all agree. I am not part of the safety team, so I cannot give you a very informed answer. We are happy to review any specific cases and give you a more specific answer.
We did that in advance. I do not know if you have seen them, but they were circulated to the company.
You have had them for a week, I think.
On those four specific cases, two accounts got suspended; one piece of content got actioned under our FOSNR policy, so it got massively deamplified on the platform and labelled as a violation of our violent speech policy; and the fourth post was not in violation. Of the posts that you sent to us, three have been clearly—
May I just ask the question: were they removed?
Yes.
Completely removed?
Yes. Two accounts got suspended.
What was the timescale from you being made aware of the content to actually removing it? Was it within hours, days or weeks?
When it got reported and we looked at it as a team, it was in a couple of hours, so just the time for the team to conduct its review of the post. For some violations, we can be very quick because they are very straightforward. In some other cases, we want to allow the agent time to understand what the situation is before they take action, so we cannot work under a specific timeframe. But what we are trying to do is work as fast as possible. We also try not to make mistakes. That is why, in some cases, we need to take more time.
We have just checked, and one of the posts we shared with you is still online, so it has not been taken down.
But, as I was telling you, one of the posts is in no violation. Three have been removed and one is not in violation of our policies.
What does the acceptable one say? “Most David Amess picture I’ve ever seen”. That was about Rupert Lowe. That is the one that remains up. Why does it remain up?
How do you see that to be in violation?
We are back to where the bar lies. It seems to have gone up rather than down. At one time, that would have been removed, in my view. I think that is the difference.
Is the implication of that post not, “Here is someone who should finish up the same way as David Amess did”? That is the conclusion I would take from that.
Somebody who was murdered.
Do you know who David Amess is?
The content is a picture of an MP—I do not want to publicly discuss the case, obviously—and then the comment says it looks like David Amess. That is all. There is no further context. It is just a picture of the MP.
I am sorry, but there is a context. Our colleague David Amess, who was here before my time, sadly was killed—
Of course. We know that. It is horrific.
And the implied context of that post is that this individual is referencing another MP—
A murdered MP.
The context is very clear to us as MPs and to the general public, and what we are not understanding is why the platform does not understand the context and will not remove that content.
Look, you could see that in different situations. I do not know if you have seen the post, but it is a picture of an MP—
I have seen the post.
It could be to say that it physically looks like another MP. It doesn’t mean that—
Oh, come on!
You don’t really believe that, do you?
Well, I have asked you to go away and come back with some of the answers.
Megan, Meta’s policy is to remove threats of violence to public figures only if they are “credible”. Why should a threat of violent injury have to be credible to be abusive, and how do you assess its credibility? Given that you have end-to-end encryption, how would you know that there were people plotting to harm or harass a politician?
I will take the two questions in turn. On your first question, it is important to recognise that we do have policies in place that include policies against violence and incitement, and where we find that content, we remove it. That will include implicit and explicit threats. But often content moderation is a complex exercise. We have expert review teams that do that, and we take into consideration many different factors on a case-by-case basis. We often factor in multiple rules at once when we make our content moderation decisions.
But how do you define something as credible? Is it entirely based on context?
It is hard to speak in the abstract without actually having a case to refer to. We have highly specialist teams that consider various different factors, including things such as whether a turn of phrase has been used, the severity of the content and the credibility of the content. Multiple factors can be engaged in any one post. This is also why, at Meta, we have set up a dedicated escalation route for all MPs to use, which you can access on the parliamentary intranet. Their name is SIRAS and they have a dedicated escalation channel straight to our review team, so if there is any additional context or if there is anything that anyone from this House is ever concerned about, that can be looked at straight away.
What about end-to-end encryption and plotting to harm?
You are correct in saying that we have WhatsApp, which is a private messaging service that is end-to-end encrypted. We know that many people value end-to-end encryption and privacy, but that is not to say that means we do nothing around this area. We absolutely do not want abuse and harassment on WhatsApp, as we don’t on our other platforms. Of course, the nature of the service is different, but we do still put in place some safety measures. The first thing to say is that you cannot discover people on WhatsApp in the same way you can on other social media platforms. You need somebody’s phone number in order to connect with them. When you get a first reach-out from a new contact, a safety notice pops up, and it will include certain information, including the country where that person has come from and whether you are in any shared groups. In that safety notice, we make it very easy to block and report. Then finally, if there is something of concern on WhatsApp, that can be reported to us, and at that point our systems can run—
But you do not monitor those conversations so that you know whether somebody is plotting to harm somebody or is a terrorist.
The nature of the service is different. It is organised around groups. That being said, we work with law enforcement on valid legal requests. We can also see the non-encrypted parts of the app. For example, profile pictures and names are not encrypted, and those are areas that we can look at.
We gave your colleague from X an example. We have also given you an example—the post about Andy McDonald, which I think was in response to his football team doing well, to which somebody replied, “Come in to Middlesbrough you will get a good slap, you waste of space.” That is a threat of violence. Is that not credible?
I appreciate the Committee's sharing that, along with a couple of others, in advance. Rest assured: I have sent that straight to the relevant teams to take a deeper dive into it. I will set out the broader context of the three pieces of content that were shared. One has been removed. For one, we are still waiting on the link, but the review teams do think that it is likely violating. This one is still in the process of being reviewed by our teams. I am sorry, I do not have a definitive answer on that one, but I will follow up with the Committee as soon as I have the outcome of that investigation.
It was posted over a year ago.
I understand that when that example was first reported, it was found previously to be non-violating; but sometimes our systems do make mistakes. That is exactly why I have sent it to our review teams to re-review that decision. As soon as I have the outcome of that I will write to the Committee. But just to reiterate, it is also in part the importance of that SIRAS reporting channel. So if there is anything, that is an expedited route of appeal that MPs have access to.
Can I come back to the question of credibility? It is very important that we understand, as a Conference, the position that your platforms take—that you have a chance to set it out accurately. Credibility can mean a number of different things. It could mean to a police officer whether or not someone actually has the means or the intent to follow through on what they say. What I want to understand is, is that what your platform means by credibility? I think it is worth exploring, is it not, whether your platform has a broader interest in whether people are making threats of violence, and the influence that may have on the public discourse, and on how the recipient of that threat, if they see it, may feel about it. So I want to understand precisely what you mean by credibility. Your platform’s policy, as we understand it—tell us if we have misunderstood—is to only remove threats of violence to public figures if they are credible. What does credible in that context mean—that someone has the intent and means to follow through on their threat of violence, or does it mean something else?
As I was saying previously, I think when it comes to our content moderation decisions—it is important to say that I am not in our content moderation team, but I will explain the policy—we take into consideration multiple different factors, and context is often very important. There are various things; multiple policies can be working at once.
Forgive me: I am not asking you to tell us about an individual decision for an individual piece of content. I am asking you to explain what your policy means. What does the word credible mean?
It can include implicit and explicit threats. But to reassure the Committee, we do work very closely with law enforcement. In the most serious cases in London, we have a law enforcement engagement team that work directly with the police on a range of cases, so we do have that communication pathway open with the police, including the parliamentary protection police.
I am sorry to press you, but that is not an answer to my question. I asked you very specifically what the word credible means in this context. It is not a question about what process you apply in relation to law enforcement. It is not a question about an individual piece of content. It is a question about what the policy means. What does the word credible mean?
It is both implicit and explicit, but perhaps it would be helpful for me to share the full details of the policy, which I do not have in front of me, after the session.
If you could let us know in writing, that would be great.
Yes.
Moving on to how we look at addressing the digital threats, I will start with Dr Rossini. I am thinking about technical solutions like automated content filtering or changes to recommendation algorithms, which could hopefully reduce the volume of abusive content. What, in your view, are their limits, especially when it comes to things like AI, and just generally what are your thoughts on using that technology to address some of what we and others are experiencing?
I think the experiences shared here today show that whatever systems are in place, they do not seem to be working super-well. It seems that everybody has been reporting things, but these things are not being taken down. Another concern that I personally have and, I imagine, members of the Committee share is that a lot of the burden is placed on the target—on the victim. Members of Parliament are supposed to go through and report all sorts of horrendous things that are sent to them, and hope that these things will be taken down maybe one day—eventually—if it does happen. Are there technical solutions to address this issue? Yes. Will they work all the time? Probably not. However, the most concerning part of everything that I have heard so far is that we are talking about Members of Parliament, who allegedly have fast-track or special access to moderation teams and so on. When I think about the broader remit of candidates and people who are not yet in this position of privilege, it concerns me a lot more that for the regular user, including candidates and people who are running for election, a lot of the tools that should allegedly help to address these issues are just not doing that. In terms of what could be done, with AI it might become easier to find and target these kinds of content. It will also become a lot easier to make these kinds of hateful comments and to spread them fast. In that, we are in a sort of cat-and-mouse game. A concern that I personally have when we think about AI or automated moderation is that in certain contexts—for instance, you just shared a tweet with a picture where an association was made with a parliamentary Member who had been assassinated. No automated content moderation can ever pick that up. If you do not have, at some point in time or in certain situations, humans brought in to understand what is happening, that kind of content will continue to be there as it is. Even with the special access and privilege that you have to be able to reach out to the companies, it is still there because they do not see the violation that it seems everybody else might. In terms of other things that could be done, it is the case especially with other platforms, including Instagram, that some people have moved to just shutting off any contact or any comments. But again, my concern is that this places unreasonable expectations on targets or victims not just to manage the intake of abuse, but to limit or restrict themselves to stop suffering abuse. This will surely affect disproportionately, as I am sure the Committee has already heard in sessions that have happened before, Members of Parliament who are from under-represented groups, or women, or both if women are also from under-represented groups. It also means that Members of Parliament will continue to have to self-censor, self-moderate and lose out on opportunities that social media offers them to connect and engage meaningfully with constituents. Part of the reason why we are discussing this with the platforms is that it is hard to envision an election taking place with social media not being a part of it. It is hard to envision your work as politicians and communicating with the public in a world where these platforms no longer exist. So there must be, has to be, a way forward so that the responsibilities of platforms are actually commensurate with the impact that they have. I think that, with the current legal frameworks that we have in this country, that is still not the case.
Thank you. That is really helpful. Megan and Claire, I wonder if you can respond on that. In particular, I am interested to know whether you are using tools already and whether they are proactive or reactive. I know that I am giving you quite a lot here, but it is important to dig into it. We have touched on the fact that the recommendation algorithms seem to amplify this negative content. I can speak from my own experience in the last week: where you have a particular post that erupts into a negative situation, it then spirals, so you suddenly find 50,000 people have seen the post, and within an hour or two you have 190 comments, all of which are incredibly negative, filled with abuse and potentially misinformation. Can you comment on that?
In terms of the first bit of your question about proactive and reactive, yes, absolutely, we increasingly use automation to find and remove harmful content at the point at which it is posted—before anybody has to report it to us—and we document how effective we are at doing that in our community standards enforcement report, which we publish on a quarterly basis. If I take violence and incitement as a category, for example, 95% of that on Facebook was removed proactively by our systems before anybody had to report it to us. For hateful conduct, that was 88%. We are making significant investments in that automation to do that proactive finding and removing content before it is reported. We also have 15,000 content reviewers that go alongside that. That is our multifaceted content moderation process. For content that does not violate our rules but that people may still find offensive—swearing may be an example, as different people have a different tolerance for swearing—we have introduced a range of different tools that people can implement to manage their own experience. If I pull out one, for example, we have got something called “hidden words” where we set a default list of words that some people may find offensive, and if you turn that on and you get a message or a comment containing any of those words, it goes into a separate inbox where you would not have to see it. It is all moderated by our systems, and all of it can be mass-deleted and mass-reported.
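Meta has not published how "hidden words" is implemented; the sketch below simply illustrates the behaviour the witness describes, with a comment containing a listed term diverted into a separate inbox instead of the visible feed. The word list, function name and data structures are invented for illustration.

```python
# Illustrative sketch of a "hidden words"-style filter, as described in the evidence:
# comments containing any listed term are diverted to a hidden inbox instead of the
# visible feed. The word list and data structures are hypothetical.
import re

HIDDEN_WORDS = {"idiot", "scum"}  # a user-configurable (or default) list of terms

def route_comment(comment: str, visible: list, hidden: list) -> None:
    """Send a comment to the hidden inbox if it contains any hidden word."""
    tokens = set(re.findall(r"[a-z']+", comment.lower()))
    (hidden if tokens & HIDDEN_WORDS else visible).append(comment)

visible_feed, hidden_inbox = [], []
for c in ["Great speech today", "You absolute idiot", "Thanks for your help with my case"]:
    route_comment(c, visible_feed, hidden_inbox)

print("Visible:", visible_feed)       # the two ordinary comments
print("Hidden inbox:", hidden_inbox)  # the abusive comment, reviewable or mass-deletable
```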
Specifically on amplifying negative, abusive and potentially misinformation content, is that something that is happening?
We have absolutely no incentive whatsoever to amplify or promote harmful content. That is absolutely not how our algorithm works. In fact, we use AI to find and remove harmful content. We are an advertiser-based business, and advertisers do not want to see their ads around harmful content. We know that because we have seen advertiser boycotts, for example. We also know that users do not want to see that type of content either, because they tell us that in surveys. We do everything we can to find and remove harmful content. That is why we have made such significant investments in this area, and it is why we have 40,000 people in the company working on this.
Thank you so much for the question. I will also take this occasion to answer the related point. So, yes, absolutely, we use a mix of AI and sophisticated machine learning systems to proactively remove content on X, but it is combined with human moderation. It is important for us to have those two aspects, because for certain content it is important to have a human review to get a better understanding. AI is useful for being able to remove content at scale. To Dr Rossini's point, sometimes a user may feel that it is a bit like emptying the ocean with a little spoon when they report content individually. The idea of being a bit more proactive and removing content at scale is about relieving that burden from people's shoulders when they get abuse. When it comes to terrorism content and child sexual abuse imagery, we are quite good with AI. The systems are working quite well and the machine is really well trained. When it comes to speech and text, there is still a bit of a learning curve for AI to be good at it. But we are investing in the systems. Also, we report on our progress. If you look at our transparency report, we indicate the amount of content, for instance, that is removed with automation and the amount of content that is removed manually, which means that an agent removes the content. That is sometimes important, to the point that Dr Rossini was making earlier about the post that I mentioned before: it is important that someone looks at it, because the machine would not have a full understanding of the post. So in some situations it is quite important. To remove this content proactively, we use what we call a heuristic system that can recognise keywords, behaviour and text. That also answers the previous question. Working with civil society is also a key aspect. Sometimes, in cases of abuse, we work with civil society as trusted reporters on our platform. There are certain situations where it is difficult for the victim to report because they are affected and not in a position to report on the platform, so a civil society actor can do that for them, and we co-operate with them on those occasions. When it comes to the algorithm, we try to use AI to filter content that is harmful or illegal out of the algorithm. We do not want this content to surface, because it is not in the interests of the platform. We want the platform to be safe. Our business model is based on advertising, so it is not in our interests to have this content surface to users. We try to de-amplify content that could have a certain amount of toxicity for users, so that it is not amplified. Thinking about women, and we could also extend this to more vulnerable groups using the platform, what we put in place is, of course, the rules and the enforcement of the rules, but also the safety features. We allow people to block other users, to mute them, or to set their profile to protected for a certain amount of time. When they have a bad experience on the platform, they are able to adapt their settings as well. Of course, this would not replace the moderation work; it very much comes as an addition. But we try to put in place different settings for them to use. I just want to use this occasion to say that I will ask the team to review again the posts that you were discussing before. I will ask for a further review.
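The "heuristic system" and the split between automated removal and human review are not described in any more detail than this. Purely as an illustration of that kind of two-tier triage, the sketch below auto-actions only high-confidence detections and queues borderline posts for a human agent; the stand-in classifier, thresholds and names are all invented rather than anything X has disclosed.

```python
# Illustrative sketch of two-tier triage as described: high-confidence detections are
# actioned automatically, borderline ones are queued for human review, the rest pass.
# The classifier is a stand-in; scores, thresholds and policy names are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def toxicity_score(post: Post) -> float:
    """Stand-in for a trained model: here, a crude keyword heuristic in [0, 1]."""
    signals = ["kill", "go back to", "deport"]
    hits = sum(1 for s in signals if s in post.text.lower())
    return min(1.0, hits / 2)

def triage(posts, auto_threshold=0.9, review_threshold=0.4):
    auto_actioned, human_queue = [], []
    for post in posts:
        score = toxicity_score(post)
        if score >= auto_threshold:
            auto_actioned.append(post)   # removed/limited before users see it
        elif score >= review_threshold:
            human_queue.append(post)     # surfaced to a human agent for context
    return auto_actioned, human_queue

posts = [Post("1", "Go back to where you came from, we will deport you"),
         Post("2", "I disagree with your vote yesterday"),
         Post("3", "Someone should go back to basics here")]
auto, queue = triage(posts)
print([p.post_id for p in auto], [p.post_id for p in queue])
```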
We know that X’s approach to these things changed when Mr Musk bought Twitter. We know the comments that he posts, but let us look at Meta, which changed with the presidential elections. As soon as Vice President Vance promoted freedom of speech as the most important thing, it put people’s ability to say what they want ahead of stopping abusive content.
We are always iterating on our content policies. We have had those in place for 21 years and over that time we have—
You changed the approach to how you proactively monitor this.
We made some changes in January, but we are always looking at our content policies and the best way to—
Why did you make the changes, then?
On some of the changes that we made in January, we had feedback that we were over-moderating on legitimate speech.
From the White House?
No, from our user base. We have made a change and are now focusing on the highest-severity harms. Those are the sorts of changes that we have made but, to be clear, we still have content policies in place.
But you made changes and clearly there was a context to that. We have talked about context before. Claire, we just talked about what users can do to block unacceptable content, but in the end isn’t the real problem that the people using your platforms—X and Facebook—feel emboldened to say what they want because they are anonymous? That is the key issue here. Shouldn’t there be a requirement for you as platforms to make the users who make these comments on your media platforms reveal who they are?
This is a very interesting question. If you put your real name—your first and last name—would there be less abuse? I think it is a valid question, and there are pros and cons. We are a global platform, and in certain countries it makes sense that people do not express themselves using their real name, because they could be put in danger. We need to take that into account. I think the real issue that we are seeing is more the feeling of impunity. People feel that they can post something on social media and there will not be consequences, but that is not the case. We have seen situations where people say something on social media and are prosecuted for it. They have to pay a fine, for instance. There can be consequences: they can be brought to court for what they post on social media. Also, because we do co-operate with law enforcement—
But they won’t get prosecuted if you don’t tell the police who they are.
We tell the police in most cases; we do co-operate with them. You can absolutely check with the NCA—
Could you give us some information about the number of requests you have had in the last year from the police to reveal people’s identity, and how many you have accepted and acted upon?
What we disclose in the transparency report is that, for Europe, in 90% of cases we remove content on the basis of a law enforcement report—
And when the police ask for identity? That is the question I asked.
On giving users’ identity, which has a higher bar in terms of consequences, it is in 50% of cases. But I want to say that it is not that we are not giving user identity. We give that information on a voluntary basis to law enforcement. When we don’t do that, we ask them to use the MLAT procedure—a legal treaty procedure that is in place for these situations. Essentially, we ask them to follow a process—
Could you give us a report on how many requests you had from the police in the United Kingdom in the last year, how many you accepted and gave the identity of the people involved, and how many you refused?
I think we can provide that to the Committee—absolutely.
Can we come to Meta on revealing who your platform users are?
Yes, absolutely. I know this was a long-standing conversation during the passage of the Online Safety Bill. This debate has been going on for a while. One thing to consider is that there are some good reasons why people may choose to be anonymous online—they may be from the LGBT community and unable to be open about their true identity, or they may be a survivor of domestic abuse—so there are some trade-offs and interests to consider. What I would say is that it doesn't necessarily matter if your name is Mickey Mouse on our platforms. We do respond to valid legal requests, and we can give information such as phone numbers or email addresses.
Can you do a report, on the same basis as the one that I have just asked X for, about the number of requests you have had from the police, how many you have responded to positively and how many you have turned down?
I can look into that.
Patrícia, you were nodding away, or shaking your head, during the course of that discussion. Would you like to make your comments?
There are a couple of things to talk about. First, there is a lot of misinformation about anonymity. For a long time, we have tended to think that there is so much abuse online because people are anonymous. For those of you who have experienced abuse online, I am sure that not all of it is anonymous. The fact that there is still a lot of abuse on platforms where people have pictures with their children and pets means that anonymity is not the culprit; it is more the disconnection that people have between their online actions and the offline consequences. That has to do with regulation, but not only that: it also has to do with platforms' enforcement and things like that. I am Brazilian—you may not know that—and Brazil is a lot more proactive in passing legislation to regulate illegal online content, including racist content, which has been illegal for a very long time in Brazil, and that has been enforced in online communities for a very long time. That has then forced platforms to be a lot more proactive in retaining certain types of information and in collaborating with law enforcement and the courts, to make sure that these things are addressed properly. It is not impossible to do, but it is important to think about how we can treat these things systemically and not just on a case-by-case basis. To me, the answer to that is regulation that sets out and enforces how and when platforms have to respond, but also what kinds of consequences exist for individuals for the types of things that they say online. There is hope that the Online Safety Act will help us think about and do that. To my knowledge, we have started with only very clear cases that platforms have been proactive about, which are mainly related to terrorism, trafficking, child abuse and pornography. Those are important cases to address, but we are talking about something else, and to my knowledge these cases are not yet in scope. Another challenge that we need to consider—I think this is what makes legislation so difficult—is that a lot of the discourse that we are talking about would fall under what some could argue is a grey area, if things are not clearly a violation. That is why there was such a push on what is credible and what is not credible, but also on what means the platforms have to assess what is a credible threat. I am not sure they have the means; it doesn't sound like it—I am not satisfied with the answers so far. Who is the one to judge? A lot of the discourse is, "Well, but could they do this?" But does it matter? If we consider the issues from the victim and target standpoint, and consider not just the anxiety and stress but also the downstream consequences in terms of participation and engagement in politics, and even continuing to work in politics, we really need to consider these issues from more of a "Does it matter if it's credible?" standpoint rather than a "Can they do this or not?" standpoint. That is part of where the problem lies. Platforms have community standards and processes in place. We all know that they have been there for a very long time. They are revised. Thresholds can rise or fall. And it seems that all of that goes with the flow of the political climate. But the crux of the matter is that policies exist but are not enforced consistently. They are not enforced consistently when people reporting the same content have different outcomes. They are not enforced consistently when content takes so long to actually be addressed.
And they are not enforced consistently when the victims have to repeatedly re-traumatise themselves to continuously try to protect themselves from all this online hate. We need to disentangle this. It is not just because there is anonymity; it also happens on non-anonymous platforms. We need to recognise that what community standards don’t do very well is take into account the positionality of the victim and the stress that they suffer. Not only are they the ones that have to report—because in some of the policies there is explicit text that says if the victim does not report, the content will stay up there—but there is also this standpoint of, “Who is supposed to say whether this harms me or not?” I was thinking about the technical solutions we were talking about before, and the dynamics of continuous attack and continuous hate. When people are being massively harassed, or are for some reason being attacked—or being “engaged with”, to use the platform term—to a much higher extent than they generally are, why is there not a red flag? Why are there not monitoring teams who could see that this MP, for example, gets 30 replies a day but today they have 3,000—maybe there is something going on? There are other things that could be put in place to supplement the flaws of AI. AI will not fix this as fast as we hope—actually, probably never. But there are ways in which we can use automated systems to potentially better direct the human resources that are put into protecting people on these platforms.
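As an illustration of the kind of automated flagging described in that evidence, the short Python sketch below compares an account's daily reply volume against its own recent baseline and raises a flag for human review when the volume spikes. The class name, thresholds and window length are illustrative assumptions for this note, not any platform's actual system.

from collections import deque

# Hypothetical sketch: flag an account for human review when its daily
# reply volume spikes far above its own recent baseline (e.g. ~30 replies
# a day normally, 3,000 today). All thresholds here are illustrative.

class ReplySpikeFlagger:
    def __init__(self, window_days: int = 28, ratio: float = 10.0, min_replies: int = 500):
        self.window = deque(maxlen=window_days)  # recent daily reply counts
        self.ratio = ratio                       # spike = ratio x baseline
        self.min_replies = min_replies           # ignore tiny absolute volumes

    def observe_day(self, replies_today: int) -> bool:
        """Return True if today's volume should be escalated to human reviewers."""
        baseline = (sum(self.window) / len(self.window)) if self.window else 0.0
        spike = (
            replies_today >= self.min_replies
            and baseline > 0
            and replies_today >= self.ratio * baseline
        )
        self.window.append(replies_today)
        return spike

# Example: an account that usually gets ~30 replies a day, then 3,000.
flagger = ReplySpikeFlagger()
for count in [28, 31, 35, 30, 29, 33, 3000]:
    if flagger.observe_day(count):
        print(f"Escalate for human review: {count} replies against recent baseline")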
If I understood the earlier evidence properly, the answer to that last point might be that the proactive monitoring of MPs’ profiles only happens at election time, and does not happen outside election time. We might ask you to clarify that. I wanted to ask about anonymity, which we talked about a moment ago with Clive. I think it is perfectly fair to say, as Megan says, that there is a difficulty with anonymity. There are good reasons for some people to want to be anonymous online. The compromise that the Online Safety Act has reached is that if you are a category 1 platform—both of you will be—you have to offer your users the opportunity to engage with only those whose identity can be verified. Do you expect that to be taken up widely? How will you be promoting it? The reason I am asking that in this context is because it is a big challenge for Members of Parliament and other politicians to do that. It is a big ask of them to say, “I will only interact with people who are prepared to verify their identity—if not to me, at least to the platform and beyond.” The question will become: how widespread is that practice? How many people do it? Will it become socially acceptable? Will it become the norm? That will rely substantially on what the platforms themselves do to promote that option and make it seem as though it is acceptable. What will you do in that line?
The Online Safety Act is still in the implementation process, and we are engaging closely with Ofcom on it. We have recently submitted the illegal harms risk assessment. We are currently working on the child safety risk assessment.
I accept it is not in force yet; it is about when it will be.
Yes. Ofcom is still to do the codes. It is hard to say until we know what that will look like, but we have a huge team working on compliance for that Act. If my understanding is correct, that is in the third category that Ofcom will be looking at, so it is hard to give a rounded view until we see what that looks like.
We are obviously engaged with Ofcom as part of the regulatory dialogue and the entry into force of the OSA. I think we are following the same calendar for the child safety code; we will be submitting that next, and then we will continue that engagement with them. I think Ofcom is looking at a potential set of additional measures, and that is something we are looking forward to discussing with them. On the topic of people’s identity online, on X in particular we give people who want to verify themselves the option to do so. We have launched a subscription service where people make a small payment on the platform, so that is a way for us to verify that there is a real person behind the account. That is also part of fulfilling our commitment to fight spam on the platform, because it is quite important for us not to have fake accounts on the service, which also degrade the user experience.
Claire and Megan, what location data do you have access to for accounts that are posting anonymously—in terms of the metadata available in the back end of the system?
It would depend on the data that we have about the person. We would sometimes be able to know, for instance, that the person is not based in the jurisdiction where the offence is being investigated. Potentially, it could be a challenge for law enforcement to prosecute them because they are not based in that jurisdiction. It is based on IP signals—signals that might not tell us exactly where the person is, but that tell us where they might have last connected. That is what we would be using in that sense.
You have the IP address of the person, and presumably you have the phone number, or can tell which country or region it is in?
It depends on what they registered with. For instance, they might have registered with an email address or a phone number. It would depend in that sense, but we gather that information, which we would send to law enforcement.
What location data do you have for people who have posted from a phone? Can you tell what country it is from, which cell tower it has pinged from, or any of that kind of data?
Yes. In most cases, we would be able to relay that information to a specific country.
Could you include within the report that Clive has asked for the number of times that the IP address and the cell tower location have been provided in response to police requests? That would be interesting to know, given that the police would be able to triangulate who an individual was from that, irrespective of whether they had posted anonymously. I do not think the police receive that information from X. What about Meta?
Police can request data from us to use in their investigations. They do that through a law enforcement portal that police forces have access to. We have the law enforcement engagement team, which has a person based in London. I know that they can request different data points from us; I do not have the specifics.
Could you go back to your team and, likewise, ask for the report to include whether that metadata is provided?
Megan Thomas indicated assent.
Dr Rossini, you pointed out how minoritised communities and women are particularly done over by social media. The category of women is one of those protected characteristics against which hate speech is not illegal, so it is a particularly odd sort of marker. Megan, why was it that Meta massively watered down its community standards in January? CNN ran the headline, “Calling women ‘household objects’ now permitted on Facebook after Meta updated its guidelines”. Do we know the before and after? Before, the policy was a lot more rigorous. I know you said that there are thousands of people, but it is not just the number of people; it is a conscious decision to make a policy at head office that says, “It’s not illegal; we don’t need to bother with that.”
You are right that we made some changes to our content policies in January. We are always looking to keep people safe, but we do iterate those policies. We had some feedback that we were over-moderating legal speech, so we have now shifted our emphasis to focus on the highest-severity harms. That being said, we have policies against behaviours that are disproportionately targeted towards women. We have strict policies against things such as gender-based violence, against content that may glorify gender-based violence, against misogynistic attacks, and against mass harassment. We also have a global women’s safety officer at the company, whose role is dedicated to advocating for women’s safety and working with women’s safety groups.
Do you have figures for before and after? The obvious inference is that abusive and misogynistic content will be easier to get through. In fact, Mark Zuckerberg says in this report, “we’re going to catch less bad stuff, but we’ll also reduce the number of…posts…that we accidentally take down.” It seems the wrong way around to say, “Oh, whoops. Sorry.”
We had feedback that we were over-moderating some legitimate speech.
Women everywhere—not just political women—are more at risk with this policy change.
There were situations in which conversations that could be had, say, on the Floor of Congress were not allowed on our platform, so we have made some adjustments. We are now focused on the highest severity harms. But to answer your question about increases since we made changes in January, we have not seen any dramatic spikes, or anything, in harmful content on the platform.
It would be good to know the figures, though, because it seems very concerning to any woman, not just political women, and we know that political women are in the public eye, so they get it more.
We absolutely agree that abuse and harassment directed towards female MPs is entirely unacceptable. We do not want to see that on the platform. That is exactly why we have those policies in place against behaviours that disproportionately impact women. It is also why we have introduced some of the tools that I spoke about, such as hidden words, and why we have that women’s safety officer in place. It is also why, if there is ever anything of concern, we have that dedicated escalation channel. If there is any additional context that would be helpful, that all goes straight to our review teams to look at.
Claire, I know you dismissed the idea that things have got worse since your new owner, Elon Musk, took over, but that led to the creation of Bluesky. Surely a lot of people are finding that X, as it is now called, has turned into such an intolerable place to go that a parallel platform, Bluesky, has come in. As Samantha Dixon and Sir Mark said, with specific comments against, I think, Jess Phillips and our leader Keir Starmer, Elon Musk seems to have picked a fight directly. Do you know that since we have been in this room the CEO, Linda Yaccarino, has resigned, and has cited this issue as why she has gone? Do you have any comment on that?
As I stress, again, the rules of the platform apply to all the users in the same way. We have rules for hateful conduct that would prohibit directly attacking people on the basis of their gender, so you cannot target people with misogynistic comments. That would be forbidden by the platform’s rules. I just cannot stress that enough.
I think everyone thinks that since it changed from being a little blue bird called Twitter to a thing called X, the climate has changed; and Linda Yaccarino has cited Grok. Before, this was done manually, and now there is this AI tool that corrects content, but it is not actually looking at the nasty stuff that people are raising here. She has resigned because Grok had a big mess-up last night. Do you have any comment on that? It is not rosy if she is going. She is the CEO. She was meant to have a long career, but she has gone after 24 months, I think.
To my knowledge, she is the CEO of X.
She has stood down.
She has gone, 30 minutes ago.
I don’t have my phone for this Committee meeting, so I cannot comment on this, obviously.
All right; I won’t pick on you. Dr Rossini, do you want to say anything on this?
An important point that a lot of the discussions we have had today raise for me is the fact that we have to rely on companies being willing to share data—“Have things got better or worse since you changed this policy or that policy?”—because there is no actual data access for researchers to independently check whether there is more or less toxicity. With the change from Twitter to X, X started charging pretty hefty fees for researchers to get data access, meaning that virtually no research is being done on that platform any more. There is also the issue that a lot of people have migrated from the platform, as you mentioned. With Facebook or Meta platforms, it is actually even harder. There are procedures through which you are supposed to get data but, from what I hear from colleagues who have tried, it is practically impossible. The DSA and the UK Online Safety Act both have provisions for these things to change and for access to be possible, in order for independent researchers to be able to verify the claims that we hear from platforms about what they are doing and not doing. But this is not yet in place. Pardon the analogy, but we are asking children, “Did you eat the chocolate or not?”, and we have to trust that whatever they say is true, because there is virtually no way for independent researchers to verify any of these things. The data is not available; even under the DSA or the provisions of the UK Online Safety Act, I don’t think the data that will be made available will actually be sufficient for us to address these questions. In the end, we have this big elephant in the room: there is no way of knowing what is actually happening other than asking platforms and considering their incentives to give the answers they give.
It is good to see the human face of these two, because we have never met you before and it always looks a bit anonymous. So I am glad to see you. I just want to echo what Mr Speaker said—
I do want somebody else to ask a question. Leigh never got in last time or this time.
I will be very quick, and would appreciate a speedy response so that we do not run over our time. I would like to know, if possible, what your platform’s target timescales are for content moderation. It would be good if you could give a really quick answer. I would like specifics on this, so if you cannot give that to me today, please write to the Committee. I am really interested in the number of people who were working in content moderation at X before and after Elon Musk took over, and in the numbers that you are seeing in terms of moderation and items being taken down. Could we also have the same conversation with Meta, specifically around Facebook and in terms of the changes made this January? It seems—following on from what Rupa said—that significant changes have happened as a result of that.
Do you want to take that away and let us know? We are past the time when we were expected to finish. Or do you want to quickly try to answer it?
Just to quickly answer your question on the process and timescales, I think it is worth saying that it is a multifaceted process; we use teams and technology. We use automation to try to remove violating content at the point at which it is posted, and our “Community Standards Enforcement Report” shows that we are getting better at doing that over time. We also have 15,000 content moderators, and the human reviewers prioritise the most severe violating content and content that has a high likelihood of violating our rules.
I realise that time is up, but there are questions about the Online Safety Act, which we had high hopes for. Maybe we do not have time for responses, but if I put the questions, you could come back on them. First, the Act is now in place and the phase 1 requirements apply to you; what actions have you taken to comply with those? Secondly, if you have taken any actions—I know that Dr Rossini says you are not very good at giving us data—what has been the impact of those changes? And, over the longer term, what preparatory action are you taking for the phase 3 requirements, which I know are not due for some time? Maybe you could write to the Committee with your answers. We have passed the Online Safety Act. We believe that it is an important tool, but we want to know how effective it has been.
I can give you a short answer to that. First, we use automation and human moderators to moderate. We have 1,486 people worldwide working on moderation. Our strategy is to rely on FTEs—people who are employed by X—for our moderation. We recently opened a centre of excellence in Austin. Our objective is to continue that and to have people in-house, with better expertise on the different policies, to continue to moderate the platform. Alongside that, we are also investing in proactive detection and AI tools to help the agents when they moderate the platform. On the OSA, I think you are absolutely right that it is a very important piece of legislation. We are seeking to be in compliance with all rules, regulations and laws that are applicable to X, and that includes the OSA. We are regularly engaged with Ofcom with that objective in mind. That is something that we consider very important.
May I say thank you? We had other questions that we have not got around to; that is how important this session was. We will write to you, and if you have any comments that you would like to come back on, please do so. Thank you for giving up your time today, but we have more questions. If you are happy with it, we will be in touch with you.

[1] X corrected this to 500 million posts every day in their written evidence SCS0055.