Why is my sitemap not being indexed?

Resubmitting the sitemap won't help on its own: Google just updates the read date, nothing more.


For example, did Google opt for a different canonical, or are redirects causing indexing issues? Google has indexed some of these URLs even though they were blocked by your robots.txt file. Action required: review these URLs and update your robots.txt file accordingly. Google has also indexed URLs on which it couldn't find any content.

There are several possible reasons for this. Action required: review these URLs to double-check whether they really don't contain content; if everything looks fine, just request reindexing. The "Excluded" section of the Coverage report has quickly become a key data source when doing SEO audits to identify and prioritize pages with technical and content configuration issues.
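
If you want a quick offline sanity check before requesting reindexing, something like the sketch below can flag pages that really do come back empty. It assumes Python with the third-party requests package; the URL list and the 100-word threshold are purely illustrative.

```python
# Rough content check for URLs reported as "indexed without content".
# Assumes Python 3 with the "requests" package; URLs and the 100-word
# threshold are illustrative.
import re
import requests

URLS = [
    "https://example.com/some-page/",
    "https://example.com/another-page/",
]

for url in URLS:
    resp = requests.get(url, timeout=10)
    # Crudely strip scripts, styles and tags to estimate visible text length.
    text = re.sub(r"<script.*?</script>|<style.*?</style>|<[^>]+>", " ",
                  resp.text, flags=re.S | re.I)
    words = len(text.split())
    if resp.status_code != 200 or words < 100:
        print(f"Check manually: {url} -> HTTP {resp.status_code}, ~{words} words")
```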

Here are some examples. Action required: if these pages shouldn't be canonicalized, change the canonical to make it self-referencing. Additionally, keep an eye on the number of pages listed here. Removal requests only hide URLs temporarily; after that period, Google may bring these URLs back up to the surface. We always recommend taking additional measures to truly prevent these URLs from popping up again.

This means Google has not found signals strong enough to warrant indexing these URLs. It's critical to remember that blocking a subset of URLs through the robots.txt file does not prevent those URLs from being indexed.

To ensure a specific page or subset of pages is not indexed, use the meta robots noindex directive. Action required: make sure that Google and other search engines have unrestricted access to the URLs you want to rank with. Google couldn't access these URLs because they returned 4xx response codes other than those that have their own issue types (such as 401, 403, and 404). This can happen with malformed URLs, for example, which sometimes return a 400 response code.
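
A quick way to see what these URLs actually return is to fetch them yourself and report both the status code and any noindex signal. This is a minimal sketch, assuming Python with the requests package; the URLs are placeholders and the meta-robots regex is deliberately rough.

```python
# Report the HTTP status code and any noindex signal for a list of URLs.
# Assumes Python 3 with the "requests" package; URLs are placeholders and
# the meta-robots regex is a rough check, not a real HTML parse.
import re
import requests

URLS = ["https://example.com/page-a/", "https://example.com/page-b/"]

for url in URLS:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
    meta_noindex = bool(
        re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', resp.text, re.I)
    )
    print(f"{url}: HTTP {resp.status_code}, "
          f"X-Robots-Tag noindex={header_noindex}, meta noindex={meta_noindex}")
```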

If you don't want to rank with these URLs, just make sure you remove any references to them. If important URLs are listed here, you need to investigate why, because that would be a serious SEO issue. If your staging environment is listed, investigate how Google found it and remove any references to it. Remember, both internal and external links can be the cause. And if search engines can find those links, it's likely visitors can as well. With the January 2021 Index Coverage update, the crawl anomaly issue type has been retired.

Instead, you'll now find more specific issue types. There are several possible reasons why a URL may be reported with this type. If you do find important URLs here, check when they were crawled. Unfortunately, this also requires you to play detective, because Google won't actually tell you why the URL isn't indexed.

Reasons could include thin content, low quality, duplication, pagination, redirection, or simply that Google only recently discovered the page and will index it soon. If you find that a page is actually important and should be indexed, this is your opportunity to take action.
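
One cheap duplication signal you can gather yourself is how many URLs share the same title tag. The sketch below assumes Python with the requests package; the URL list is illustrative and the regex parsing is a simplification.

```python
# Group URLs by their <title> tag and flag titles that are reused, as a quick
# duplication signal. Assumes Python 3 with "requests"; URLs are illustrative.
import re
from collections import defaultdict

import requests

URLS = [
    "https://example.com/a/",
    "https://example.com/b/",
    "https://example.com/c/",
]

titles = defaultdict(list)
for url in URLS:
    html = requests.get(url, timeout=10).text
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    titles[match.group(1).strip() if match else "(no title)"].append(url)

for title, urls in titles.items():
    if len(urls) > 1:
        print(f"Shared title {title!r}: {', '.join(urls)}")
```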

Action required: keep an eye on this. If the number of URLs increases, you may have crawl budget issues: your site is demanding more attention than Google is willing to spend on it. This URL state is part of the natural process to some extent, and keep in mind that this report can lag a little behind the actual state. Always confirm the actual state with the URL Inspection tool first, and if a large number of important pages keep hanging around here, take a careful look at what Google is actually crawling (your log files are your friend!).
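
As a starting point for that log review, a small script can show which sections of the site Googlebot spends its crawl budget on. The sketch below assumes a combined-format access log at a hypothetical path and matches on the user-agent string only; for a rigorous check you would also verify Googlebot via reverse DNS.

```python
# Summarise where Googlebot spends its crawl budget, based on an access log in
# combined log format. The log path is a hypothetical placeholder; matching on
# the user-agent string alone can be spoofed, so verify via reverse DNS for
# anything important.
import re
from collections import Counter

LOG_FILE = "/var/log/nginx/access.log"  # hypothetical path
line_re = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*".*Googlebot')

hits = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = line_re.search(line)
        if m:
            # Bucket by the first path segment to see which sections get crawled most.
            section = "/" + m.group("path").lstrip("/").split("/")[0]
            hits[section] += 1

for section, count in hits.most_common(15):
    print(f"{count:>7}  {section}")
```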

These URLs are duplicates according to Google: it found them on its own and considers them duplicates. Even though you canonicalized them to your preferred URL, Google chose to ignore that and apply a different canonical.

This can happen when a website serves similar content with only minor localisation to different markets, or when pages are duplicated across a website. Google may still serve the right URL; however, it shows the title and description of its selected version. Possible solutions are to create unique content (if your hreflang setup is not enough) or to noindex the copies of these pages. Please note that this type is very similar to the "Duplicate, Google chose different canonical than user" type, but differs in two ways.
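
To spot pages whose declared canonical doesn't point at themselves, a rough audit like the one below can help. It assumes Python with the requests package; the URLs are illustrative and the regex expects the rel attribute before href, so treat it as a sketch rather than a proper HTML parse.

```python
# Fetch each URL and report whether its rel=canonical points to itself.
# Assumes Python 3 with "requests"; URLs are illustrative and the regex expects
# rel before href, so treat this as a rough audit, not a full HTML parse.
import re
import requests

URLS = ["https://example.com/en/product-1/", "https://example.com/de/product-1/"]

for url in URLS:
    html = requests.get(url, timeout=10).text
    m = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
                  html, re.I)
    canonical = m.group(1) if m else None
    if canonical is None:
        print(f"{url}: no canonical tag found")
    elif canonical.rstrip("/") != url.rstrip("/"):
        print(f"{url}: canonical points to {canonical} (not self-referencing)")
    else:
        print(f"{url}: self-referencing canonical")
```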

When performing website migrations, it's a common best practice to keep the XML sitemap that contains the old URLs available, to speed up the migration process. Double-check whether there are any internal links pointing to noindexed pages, as you don't want them to be publicly reachable. Please note that, if you want to make pages truly inaccessible, the best way to go about it is to implement HTTP authentication. And if you know that your site generates a large amount of content that should carry the noindex tag, check that it shows up in this report.
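
During a migration it's worth confirming that every URL in that old sitemap now permanently redirects. The sketch below assumes Python with the requests package and a hypothetical old sitemap URL (a plain urlset, not a sitemap index).

```python
# Confirm every URL in the old sitemap now returns a permanent redirect.
# Assumes Python 3 with "requests"; the sitemap URL is a placeholder and is
# expected to be a plain <urlset>, not a sitemap index.
import xml.etree.ElementTree as ET

import requests

OLD_SITEMAP = "https://example.com/old-sitemap.xml"  # hypothetical
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(OLD_SITEMAP, timeout=10).content)
for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    resp = requests.get(url, timeout=10, allow_redirects=False)
    if resp.status_code in (301, 308):
        print(f"OK   {url} -> {resp.headers.get('Location')}")
    else:
        print(f"FIX  {url} returned {resp.status_code} (expected a permanent redirect)")
```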

If you don't redirect to a highly relevant alternative, the URL is likely to be seen as a soft 404. Alternatively, these errors can be the result of redirects pointing to pages that Google considers not relevant enough.

Take, for example, a product detail page that's been redirected to its category page, or even to the home page. On e-commerce sites I often see soft 404 errors like this. Always take a look and check whether your content makes sense, or, if you've redirected the URL, whether the redirect target is relevant.
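
A simple way to catch this pattern is to follow the redirects yourself and flag anything that ends up on the homepage. The sketch below assumes Python with the requests package; the list of retired product URLs is illustrative, and landing on a category page still needs a manual relevance check.

```python
# Follow redirects for retired product URLs and flag anything that lands on the
# homepage, which Google is likely to treat as a soft 404. Assumes Python 3 with
# "requests"; the URL list is illustrative.
from urllib.parse import urlparse

import requests

OLD_PRODUCT_URLS = [
    "https://example.com/products/discontinued-widget/",
    "https://example.com/products/old-gadget/",
]

for url in OLD_PRODUCT_URLS:
    resp = requests.get(url, timeout=10, allow_redirects=True)
    final_path = urlparse(resp.url).path
    if final_path in ("", "/"):
        print(f"Likely soft 404: {url} redirects to the homepage ({resp.url})")
    else:
        print(f"{url} -> {resp.url} (HTTP {resp.status_code}); check relevance manually")
```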

There are various potential issues Google may have run into here. Action required: investigate why the URL returned a 5xx error, and fix it. Oftentimes these 5xx errors are only temporary, because the server was too busy. Make sure to check your log files and your rate-limiting setup: using software to block scrapers or malicious users can result in search engine bots getting blocked too.

Blocked requests are usually stopped (for example at a firewall or CDN) before they reach the server where your log files are collected, so don't forget to check both sources to identify possible problems. This is one of the first things to check after a site relaunch or migration has taken place.
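
One quick test for over-aggressive blocking is to request the same URL with a browser-like user agent and with Googlebot's user agent and compare the responses. The sketch below assumes Python with the requests package; it only spoofs the user-agent string, while some firewalls also verify Googlebot's IP via reverse DNS, so treat any difference as a hint rather than proof.

```python
# Fetch the same URL with a browser-like user agent and with Googlebot's user
# agent and compare the responses. This only spoofs the user-agent string; some
# firewalls also verify Googlebot by reverse DNS, so a matching result here is
# not absolute proof. Assumes Python 3 with "requests"; the URL is illustrative.
import requests

URL = "https://example.com/"
AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

results = {}
for name, ua in AGENTS.items():
    resp = requests.get(URL, timeout=10, headers={"User-Agent": ua})
    results[name] = resp.status_code
    print(f"{name:<10} HTTP {resp.status_code}, {len(resp.content)} bytes")

if results["browser"] != results["googlebot"]:
    print("Different status codes: your firewall or CDN may be blocking Googlebot.")
```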

Large aggregator or e-commerce sites tend to leave important directories disallowed within their robots.txt file. A noindex robots directive is only one of many signals indicating whether a URL should be indexed or not; canonicals, internal links, redirects, hreflang, and sitemaps all play a role too. Google doesn't override directives for the fun of it: ultimately, it is trying to help! Where contradictory signals exist, such as a canonical and a noindex being present on the same page, Google will need to choose which hint to take.

In general, Google will tend to choose the canonical over the noindex. This type is highly similar to the soft 404 type we covered earlier, the only difference being that in this case you submitted these URLs through the XML sitemap. A related type is very similar to the one below it, the only difference being that the URL returned a 401 HTTP response because login credentials were expected. Action required: if these URLs should be available to the public, provide unrestricted access.
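
If you want to hunt for those contradictory signals at scale, a rough check like the one below flags pages that send a noindex together with a canonical pointing elsewhere. It assumes Python with the requests package; the URL and the regexes are illustrative simplifications.

```python
# Flag pages that send contradictory hints: a noindex directive combined with a
# canonical pointing at a different URL. Assumes Python 3 with "requests"; the
# URL and the regexes are illustrative simplifications.
import re
import requests

URLS = ["https://example.com/category/?color=red"]

for url in URLS:
    resp = requests.get(url, timeout=10)
    html = resp.text
    noindex = (
        "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
        or bool(re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I))
    )
    m = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
                  html, re.I)
    canonical = m.group(1) if m else None
    if noindex and canonical and canonical.rstrip("/") != url.rstrip("/"):
        print(f"Contradictory signals on {url}: noindex plus canonical to {canonical}")
```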

Google will index an entire site if it is a top performer, and that takes time, trust me. Here is the good news: if you are willing to accept what is true, then you can deal with it by creating the best site you can, and anyone can create a top-performing site, but it takes time. No new site comes out of the gate a winner.

You have to run the field for a while to learn how to compete, which races you should run, and which tracks are in your favor. My immediate advice is to create great content, make the site simple to use and index, and work on performing well in the SERPs.

Do not worry about what you cannot control, and work on what you can control. The only thing that can make Google more interested in your website, on top of decent content and site structure, is links from other websites pointing to your site. Start with robots.txt: Google follows the rules in your robots.txt file, so check whether it prevents your pages from being crawled. If it does, that's why Google won't index them. When that's done, follow closetnoc's advice and build a quality website with quality links, then submit a sitemap of all the important URLs.
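
The Python standard library can do that robots.txt check for you. The sketch below uses urllib.robotparser against an illustrative site; keep in mind it only tells you whether crawling is blocked, which, as noted above, is not the same thing as preventing indexing.

```python
# Check whether robots.txt blocks Googlebot from crawling specific URLs, using
# only the Python standard library. The site and paths are illustrative. Note
# that robots.txt controls crawling, not indexing.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

for url in ["https://example.com/", "https://example.com/blog/some-post/"]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'allowed' if allowed else 'BLOCKED'}: {url}")
```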

Also, use Google's various tools, including the Fetch as Google tool in Webmaster Tools and Google's PageSpeed Insights tool, to test random pages of your site to make sure they are fast and that they follow Google's webmaster quality guidelines. All of these can be found with a quick Google search. In addition, in Webmaster Tools you can select the gear icon at the top right, go to Site Settings, and raise Google's crawl rate for the site, for example to 2 requests per second.
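
PageSpeed Insights also has a public API (v5), so you can script the speed checks instead of pasting URLs into the web UI. The sketch below assumes Python with the requests package and an illustrative page; the exact response fields for the Lighthouse performance score are my assumption, so verify them against the API documentation.

```python
# Query the PageSpeed Insights v5 API for a page's mobile performance score.
# Assumes Python 3 with "requests"; the page URL is illustrative, and the exact
# response fields below (lighthouseResult -> categories -> performance -> score)
# are an assumption, so verify against the API docs.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
page = "https://example.com/some-page/"

data = requests.get(PSI_ENDPOINT, params={"url": page, "strategy": "mobile"},
                    timeout=60).json()
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(f"{page}: mobile performance score {score * 100:.0f}/100")
```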

One more thing: make sure your content is original. An example of duplicated content is running a website where you pull in news stories from a major news website, place minor decorations around them, and claim them as your own. A site like that is very unlikely to be indexed by any search engine. It is not enough just to create a sitemap and submit it. To get URLs indexed you have to link to them on your website, ideally from multiple places each (see: The Sitemap Paradox). You also have to get enough inbound links to your site so that Google sees it is popular enough to deserve having that many URLs indexed.

For a brand-new site with no inbound links, Google may choose to index only a handful of pages. Once your site is more established and has many recommendations in the form of inbound links, Google will index hundreds of pages.
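
A rough way to act on the Sitemap Paradox point is to compare the URLs in your sitemap with the URLs you can actually reach through internal links. The sketch below assumes Python with the requests package, a plain (non-index) sitemap.xml at an illustrative location, a two-level crawl, and naive regex link extraction.

```python
# Compare the URLs in the sitemap with URLs reachable via internal links from
# the homepage (two levels deep). Assumes Python 3 with "requests", a plain
# (non-index) sitemap.xml at an illustrative location, and rough regex parsing.
import re
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

import requests

SITE = "https://example.com/"
SITEMAP = urljoin(SITE, "sitemap.xml")
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
sitemap_urls = {loc.text.strip() for loc in root.findall(".//sm:loc", NS)}

def internal_links(url):
    """Return absolute same-site URLs found in href attributes (rough parse)."""
    html = requests.get(url, timeout=10).text
    return {urljoin(url, href)
            for href in re.findall(r'href=["\']([^"\'#]+)["\']', html)
            if urljoin(url, href).startswith(SITE)}

linked = internal_links(SITE)
for page in list(linked):
    linked |= internal_links(page)

for orphan in sorted(sitemap_urls - linked):
    print(f"In sitemap but not internally linked (two levels deep): {orphan}")
```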

Then just click "Add a sitemap", check that your URL is correct, and wait for Google's result; after a few minutes it will be displayed.
