Maximize access to information

At Google, we believe in open access to information. We believe that society works best when people have access to information from many sources. That’s why we do not remove web results from Search, except in very limited circumstances. However, when listings and other information are presented as Search features, users may interpret the information as having greater quality or credibility. In those cases, we apply more restrictive policies.

Illustration of a person searching for best indoor plants on Google

Defining dedicated policies

Our policies for web results

Web results are web pages, images, videos, news content, or other material that Google finds from across the web. In keeping with our commitment to maximize access to information, we do not remove web results except for specific reasons covered by our overall content policies for Google Search, which include child sexual abuse imagery, highly personal information, spam, site owner requests, and valid legal requests.

Illustration of Search results circling a long page representing Google’s content policies

Our policies for Search features

Search features include panels, carousels, enhancements to web listings (such as through structured data), predictive and refinement features, and results and features spoken aloud. Even though these features are automatically generated, we understand that people might perceive them as more credible because of how they're presented. We also don't want predictive or refinement features to shock or offend people unexpectedly. That's why our Search features policies bar harassing, hateful, and violent content, among other issues. In addition, some Search features have more specific policies of their own.
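
For site owners wondering what "structured data" refers to here, the sketch below shows the kind of schema.org markup a page might embed so that its listing can be enhanced. It is a hypothetical illustration, not a requirement of any particular feature; the Recipe type and its properties are used purely as an example.

```python
import json

# A hypothetical sketch of schema.org structured data (JSON-LD) that a page
# might embed to become eligible for an enhanced listing in Search. The type
# and properties shown are illustrative, not a requirement of any feature.
recipe_markup = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple banana bread",
    "author": {"@type": "Person", "name": "Example Author"},
    "prepTime": "PT15M",
    "cookTime": "PT1H",
    "recipeIngredient": ["3 ripe bananas", "2 cups flour", "1 cup sugar"],
}

# Site owners typically place this JSON inside a
# <script type="application/ld+json"> tag in the page's HTML.
print(json.dumps(recipe_markup, indent=2))
```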

Illustration of Search results comparing different foods and recipes

Why problematic content may appear

Since Search encompasses trillions of pages and other content across the web, results may occasionally contain material that some find objectionable or offensive. This is especially likely when the language in a search query closely matches the language that appears within problematic content. It can also happen when little useful or reliable content has been published on a particular topic. Such problematic content does not reflect Google's own opinions. However, our belief in open access to information means that we do not remove such content except in accordance with our specific policies or legal obligations.

Illustration of Search results with one topic flagged with a warning label

Addressing policy-violating content

Google uses automation to discover content from across the web and other sources. Automated systems like our search algorithms surface what seems to be the most useful or reliable content in response to a particular query. Automation also helps power our SafeSearch feature, which helps keep explicit content out of search results.

Google processes billions of searches per day. In fact, 15% of the searches we process every day are ones we’ve never seen before.

Illustration of various tools and metrics providing feedback on Search results

Automation is also generally Google's first line of defense against policy-violating content. Our systems are designed to prioritize what appears to be the most useful and helpful content on a given topic, and not to surface content that violates our content policies. No system is 100% perfect: when our automated systems miss policy-violating content, we look for ways to improve them for the future.

In some cases, we may also take manual action. This does not mean that Google uses human curation to rearrange the results on a page. Instead, human reviewers examine cases where policy-violating content has surfaced and take manual action to block that content in limited, well-defined situations.
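
To make that division of labor concrete, the toy sketch below illustrates the general pattern described above: automated checks act as the first line of defense and filter policy-violating content before ranking, while a small, manually curated blocklist captures the limited cases where human reviewers have acted. It is not Google's implementation; every name, score, and URL is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float       # score from an automated ranking system (assumed)
    violates_policy: bool  # verdict from an automated policy check (assumed)

# Manual actions: a narrow, well-defined list maintained by human reviewers.
MANUAL_BLOCKLIST = {"https://example.com/known-violation"}

def rank(results: list[Result]) -> list[Result]:
    """Drop policy-violating or manually blocked results, then sort the rest
    by relevance so the most useful content appears first."""
    eligible = [
        r for r in results
        if not r.violates_policy and r.url not in MANUAL_BLOCKLIST
    ]
    return sorted(eligible, key=lambda r: r.relevance, reverse=True)

if __name__ == "__main__":
    sample = [
        Result("https://example.com/helpful-guide", 0.92, False),
        Result("https://example.com/spammy-page", 0.95, True),
        Result("https://example.com/known-violation", 0.90, False),
    ]
    for r in rank(sample):
        print(r.url, r.relevance)
```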
