Friday, December 06, 2019


Qualify your outbound links to Google

For certain links on your site, you might want to tell Google your relationship with the linked page. To do that, use one of the following rel attribute values in the <a> tag.
For regular links that you expect Google to follow without any qualifications, you don't need to add a rel attribute. Example: "My favorite horse is the <a href="https://en.wikipedia.org/wiki/Palomino">palomino</a>." For other links, use one of the following values:
rel="sponsored"
Mark links that are advertisements or paid placements (commonly called paid links) as sponsored. See Google's stance on paid links for more information.
NOTE: The nofollow attribute was previously recommended for these types of links and is still an acceptable way to flag them, though sponsored is preferred. 
rel="ugc"
We recommend marking user-generated content (UGC) links, such as comments and forum posts, as ugc.
If you want to recognize and reward trustworthy contributors, you might remove this attribute from links posted by members or users who have consistently made high-quality contributions over time. Read more about avoiding comment spam.
rel="nofollow"Use the nofollow value when other values don't apply, and you'd rather Google not associate your site with, or crawl the linked page from, your site. (For links within your own site, use robots.txt, as described below.)
Links marked with these rel attributes will generally not be followed. Remember that the linked pages may be found through other means, such as sitemaps or links from other sites, and thus they may still be crawled. These rel attributes are used only in <a> tags (because Google can follow only links pointed to by an <a> tag), except nofollow, which is also available as a robots meta tag.
If you need to prevent Google from following a link to a page on your own site, use the robots.txt Disallow rule.
To prevent Google from indexing a page, allow crawling and use the noindex robots rule.
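To make the options above concrete, here is a minimal illustrative sketch; the URLs, paths, and link text are placeholders, not values taken from this article:

  <!-- Paid placement: qualify with sponsored (nofollow also still works) -->
  <a href="https://ads.example.com/offer" rel="sponsored">Great deal on hotels</a>

  <!-- Link posted by a user in a comment or forum thread -->
  <a href="https://example.org/their-site" rel="ugc">commenter's site</a>

  <!-- You don't vouch for the page and don't want Google to associate you with it -->
  <a href="https://unknown.example.net/page" rel="nofollow">source</a>

  <!-- Multiple values can be combined -->
  <a href="https://ads.example.com/offer" rel="sponsored nofollow">ad</a>

  # robots.txt: keep Google from crawling a section of your own site
  User-agent: Googlebot
  Disallow: /internal-search/

  <!-- To keep a crawlable page out of the index, use noindex instead -->
  <meta name="robots" content="noindex">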

Keep a simple URL structure

A site's URL structure should be as simple as possible. Consider organizing your content so that URLs are constructed logically and in a manner that is most intelligible to humans (when possible, readable words rather than long ID numbers). For example, if you're searching for information about aviation, a URL like http://en.wikipedia.org/wiki/Aviation will help you decide whether to click that link. A URL like http://www.example.com/index.php?id_sezione=360&sid=3a5ebc944f41daa6f849f730f1 is much less appealing to users.
Consider using punctuation in your URLs. The URL http://www.example.com/green-dress.html is much more useful to us than http://www.example.com/greendress.html. We recommend that you use hyphens (-) instead of underscores (_) in your URLs.
Overly complex URLs, especially those containing multiple parameters, can cause problems for crawlers by creating unnecessarily high numbers of URLs that point to identical or similar content on your site. As a result, Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all the content on your site.
Common causes of this problem
Unnecessarily high numbers of URLs can be caused by a number of issues. These include:
  • Additive filtering of a set of items. Many sites provide different views of the same set of items or search results, often allowing the user to filter this set using defined criteria (for example: show me hotels on the beach). When filters can be combined in an additive manner (for example: hotels on the beach and with a fitness center), the number of URLs (views of data) on the site explodes. Creating a large number of slightly different lists of hotels is redundant, because Googlebot needs to see only a small number of lists from which it can reach the page for each hotel. For example:
    • Hotel properties at "value rates":
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461
    • Hotel properties at "value rates" on the beach:
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461+4294967240
    • Hotel properties at "value rates" on the beach and with a fitness center:
      http://www.example.com/hotel-search-results.jsp?Ne=292&N=461+4294967240+4294967270
  • Dynamic generation of documents. This can result in small changes because of counters, timestamps, or advertisements.
  • Problematic parameters in the URL. Session IDs, for example, can create massive amounts of duplication and a greater number of URLs.
  • Sorting parameters. Some large shopping sites provide multiple ways to sort the same items, resulting in a much greater number of URLs. For example:
    http://www.example.com/results?search_type=search_videos&search_query=tpb&search_sort=relevance
       &search_category=25
  • Irrelevant parameters in the URL, such as referral parameters. For example:
    http://www.example.com/search/noheaders?click=6EE2BF1AF6A3D705D5561B7C3564D9C2&clickPage=
       OPD+Product+Page&cat=79
    http://www.example.com/discuss/showthread.php?referrerid=249406&threadid=535913
    http://www.example.com/products/products.asp?N=200063&Ne=500955&ref=foo%2Cbar&Cn=Accessories.
  • Calendar issues. A dynamically generated calendar might generate links to future and previous dates with no restrictions on start or end dates. For example:
    http://www.example.com/calendar.php?d=13&m=8&y=2011
    http://www.example.com/calendar/cgi?2008&month=jan
  • Broken relative links. Broken relative links can often cause infinite spaces. Frequently, this problem arises because of repeated path elements. For example:
    http://www.example.com/index.shtml/discuss/category/school/061121/html/interview/
      category/health/070223/html/category/business/070302/html/category/community/070413/html/FAQ.htm
Steps to resolve this problem
To avoid potential problems with URL structure, we recommend the following:
  • Consider using a robots.txt file to block Googlebot's access to problematic URLs. Typically, you should consider blocking dynamic URLs, such as URLs that generate search results, or URLs that can create infinite spaces, such as calendars. Using wildcards (such as *) in your robots.txt file can allow you to easily block large numbers of URLs (see the sketch after this list).
  • Wherever possible, avoid the use of session IDs in URLs. Consider using cookies instead. Check our Webmaster Guidelines for additional information.
  • Whenever possible, shorten URLs by trimming unnecessary parameters.
  • If your site has an infinite calendar, add a nofollow attribute to links to dynamically created future calendar pages.
  • Check your site for broken relative links.
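As a sketch of the robots.txt and calendar recommendations above, assuming URL patterns like those in the hypothetical examples earlier in this section:

  # robots.txt — block crawling of parameterized search results and calendar pages
  User-agent: Googlebot
  Disallow: /hotel-search-results.jsp
  Disallow: /*?sid=
  Disallow: /calendar.php

  <!-- and mark links to dynamically generated future calendar pages nofollow -->
  <a href="/calendar.php?d=14&m=8&y=2011" rel="nofollow">Next day</a>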

Secure your site with HTTPS

Protect your site and your users

What is HTTPS?

HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Users expect a secure and private online experience when using a website. We encourage you to adopt HTTPS in order to protect your users' connections to your website, regardless of the content on the site.
Data sent using HTTPS is secured via Transport Layer Security protocol (TLS), which provides three key layers of protection:
  1. Encryption—encrypting the exchanged data to keep it secure from eavesdroppers. That means that while the user is browsing a website, nobody can "listen" to their conversations, track their activities across multiple pages, or steal their information.
  2. Data integrity—data cannot be modified or corrupted during transfer, intentionally or otherwise, without being detected.
  3. Authentication—proves that your users communicate with the intended website. It protects against man-in-the-middle attacks and builds user trust, which translates into other business benefits.

Best practices when implementing HTTPS

Use robust security certificates

You must obtain a security certificate as a part of enabling HTTPS for your site. The certificate is issued by a certificate authority (CA), which takes steps to verify that your web address actually belongs to your organization, thus protecting your customers from man-in-the-middle attacks. When setting up your certificate, ensure a high level of security by choosing a 2048-bit key. If you already have a certificate with a weaker key (1024-bit), upgrade it to 2048 bits. When choosing your site certificate, keep in mind the following:
  • Get your certificate from a reliable CA that offers technical support.
  • Decide the kind of certificate you need:
    • Single certificate for single secure origin (e.g. www.example.com).
    • Multi-domain certificate for multiple well-known secure origins (e.g. www.example.com, cdn.example.com, example.co.uk).
    • Wildcard certificate for a secure origin with many dynamic subdomains (e.g. a.example.com, b.example.com).
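As one way to follow the key-size recommendation above, a certificate signing request with a 2048-bit RSA key can be generated with OpenSSL; the domain and file names below are placeholders:

  # Generate a new 2048-bit RSA private key and a certificate signing request (CSR)
  openssl req -newkey rsa:2048 -nodes -keyout www.example.com.key -out www.example.com.csr

You then submit the resulting CSR to your chosen CA to obtain the certificate.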

Use server-side 301 redirects

Redirect your users and search engines to the HTTPS page or resource with server-side 301 HTTP redirects.
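For example, on an nginx server a site-wide redirect from HTTP to HTTPS might look like the following sketch; the server names are placeholders, and other web servers have equivalent mechanisms:

  server {
      listen 80;
      server_name example.com www.example.com;
      # Permanently redirect every HTTP request to its HTTPS equivalent
      return 301 https://$host$request_uri;
  }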

Verify that your HTTPS pages can be crawled and indexed by Google

  • Do not block your HTTPS pages with a robots.txt file.
  • Do not include meta noindex tags in your HTTPS pages.
  • Use the URL Inspection tool to test whether Googlebot can access your pages.

Support HSTS

We recommend that HTTPS sites support HSTS (HTTP Strict Transport Security). HSTS tells the browser to request HTTPS pages automatically, even if the user enters http in the browser location bar. It also tells Google to serve secure URLs in the search results. All this minimizes the risk of serving unsecured content to your users.
To support HSTS, use a web server that supports it and enable the functionality.
Although it is more secure, HSTS adds complexity to your rollback strategy. We recommend enabling HSTS this way:
  1. Roll out your HTTPS pages without HSTS first.
  2. Start sending HSTS headers with a short max-age. Monitor your traffic, both from users and from other clients, as well as the performance of dependents such as ads.
  3. Slowly increase the HSTS max-age.
  4. If HSTS doesn't affect your users and search engines negatively, you can, if you wish, ask for your site to be added to the HSTS preload list used by most major browsers.
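As an illustration of this gradual rollout, the HSTS header could be sent from an nginx configuration roughly as follows; the max-age values are illustrative choices, not prescribed ones, and only one directive would be active at a time:

  # Step 2: start with a short max-age (five minutes) while monitoring traffic
  add_header Strict-Transport-Security "max-age=300" always;

  # Step 3: once nothing breaks, raise max-age step by step (for example to two years)
  add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

  # Step 4 (optional): add preload before submitting the site to the preload list
  add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;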

Consider using HSTS preloading

If you enable HSTS, you can optionally support HSTS preloading for extra security and improved performance. To enable preloading, you must visit hstspreload.org and follow the submission requirements for your site.

Avoid these common pitfalls

Throughout the process of making your site secure with TLS, avoid the following mistakes:
  • Expired certificates: Make sure your certificate is always up to date.
  • Certificate registered to the incorrect website name: Check that you have obtained a certificate for all host names that your site serves. For example, if your certificate only covers www.example.com, a visitor who loads your site using just example.com (without the "www." prefix) will be blocked by a certificate name mismatch error.
  • Missing Server Name Indication (SNI) support: Make sure your web server supports SNI and that your audience generally uses supported browsers. While SNI is supported by all modern browsers, you'll need a dedicated IP address if you need to support older browsers.
  • Crawling issues: Don't block your HTTPS site from crawling using robots.txt.
  • Indexing issues: Allow indexing of your pages by search engines where possible. Avoid the noindex meta tag.
  • Old protocol versions: Old protocol versions are vulnerable; make sure you use up-to-date TLS libraries and implement the newest protocol versions.
  • Mixed security elements: Embed only HTTPS content on HTTPS pages.
  • Different content on HTTP and HTTPS: Make sure the content on your HTTP site and your HTTPS site is the same.
  • HTTP status code errors on HTTPS: Check that your website returns the correct HTTP status code. For instance, 200 OK for accessible pages, or 404 or 410 for pages that do not exist.
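As a small illustration of the mixed security elements pitfall above, resources embedded on an HTTPS page should themselves load over HTTPS; the URLs are placeholders:

  <!-- Mixed content: an HTTP resource embedded on an HTTPS page -->
  <script src="http://www.example.com/library.js"></script>

  <!-- Fixed: load the resource over HTTPS (or use a relative URL) -->
  <script src="https://www.example.com/library.js"></script>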

More tips

See the HTTPS migration FAQs for more tips about using HTTPS pages on your site.

Migrating from HTTP to HTTPS

If you migrate your site from HTTP to HTTPS, Google treats this simply as a site move with URL changes. This can temporarily affect some of your traffic numbers. See the site move overview page to learn more.
Add the new HTTPS property to Search Console: Search Console treats HTTP and HTTPS separately, and data is not shared between properties.
See the troubleshooting page for site moves to troubleshoot problems with your migration.

More information

More details on implementing TLS on your site:
