described in your testimony, or both? If it was a consequence of Terms of Service
violations, how and why would such violations render affected accounts invisible to Twitter
users?
In July 2018, we acknowledged that some accounts (including those of Republicans and
Democrats) were not being auto-suggested even when people were searching for their specific
name. Our usage of the behavioral signals within search was causing this to happen. Specifically,
if an account had a large number of followers who violated our terms of service, it impacted the
visibility of the account. To be clear, this only impacted our search auto-suggestions. The
accounts, their Tweets, and surrounding conversation about those accounts were still showing up
in search results. Once identified, the issue was resolved within 24 hours. This
impacted 600,000 accounts across the globe and across the political spectrum. Most
accounts affected had nothing to do with politics at all. In addition to fixing the search
auto-suggestion function, Twitter is continuing to improve our systems so they can better detect
these issues and correct for them.
Twitter had made a change to how one of our behavior-based algorithms works in search
results. When people used search, our algorithms were filtering Tweets from accounts with a
higher likelihood of being abusive out of the “Latest” tab by default. Those search results were
visible in “Latest” if someone turned off the quality filter in search, and they also appeared in Top
search and elsewhere throughout the product. Twitter decided that a higher level of precision was
needed when filtering to ensure these accounts are included in “Latest” by default. Twitter
therefore turned off the algorithm. As always, we will continue to refine our approach and will be
transparent about why we make the decisions that we do.
8. Mr. Dorsey, in response to a question about the Meghan McCain incident and the
inadequacies of Twitter's abuse prioritization mechanism, you indicated “[i]n this
particular case, the reason why was because [the violent and physical harm element] was
captured within an image rather than the tweet text itself”
(emphasis added). Is Twitter currently without the technological tools to police harmful
and abusive content embedded in images, .gifs, links, videos, or audio clips? If yes to
any, how do human reviewers police harmful and abusive content embedded in images,
.gifs, links, videos, or audio clips?
Twitter strives to provide an environment where people can feel free to express
themselves. If abusive behavior happens, Twitter wants to ensure that it is easy for people to
report it to us. In order to ensure that people feel safe expressing diverse opinions and beliefs,
Twitter prohibits behavior that crosses the line into abuse, including behavior that harasses,
intimidates, or uses fear to silence another’s voice.
Anyone can report abusive behavior directly from a Tweet, profile, or Direct Message.
An individual navigates to the offending Tweet, account, or message and selects an icon that
reports that it is abusive or harmful. Other reporting options are available, for example flagging
the posting of private information or a violent threat. Multiple Tweets can be included in the
same report, helping us
gain better context while investigating the issues to resolve them faster. For some types of report