Introduction to Cloud Load Balancing, DNS and CDN, and Connecting Networks to Google Cloud VPC #GCPday7

Intro to Load Balancing

We have seen how virtual machines can autoscale to respond to changing loads.

Let us now see how exactly a customer gets to a firm's application when it might be served by 4 VMs at one moment and 10 at another. To achieve this, a load balancer is used.

Load Balancer:- The job of a load balancer is to distribute user traffic across multiple instances of an application. By spreading the load, load balancing reduces the risk that applications experience performance issues.
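As a rough illustration of the idea only (not how Cloud Load Balancing is implemented internally), here is a minimal round-robin sketch in Python; the VM names are made up for the example:

```python
from itertools import cycle

def make_round_robin_balancer(instances):
    """Return a function that hands out backends one after another."""
    pool = cycle(instances)      # loop over the instance list forever
    return lambda: next(pool)    # each call returns the next instance

# Hypothetical backends: the pool could hold 4 VMs now and 10 later.
pick_backend = make_round_robin_balancer(["vm-1", "vm-2", "vm-3", "vm-4"])

for request_id in range(6):
    print(f"request {request_id} -> {pick_backend()}")
```

Each request goes to a different instance in turn, so no single VM absorbs all of the load.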

\>>>Cloud Load Balancing is a fully distributed, software-defined, managed service for all your traffic. And because the load balancers don’t run in VMs that you have to manage, you don’t have to worry about scaling or managing them.

\>>>You can put Cloud Load Balancing in front of all of your traffic: HTTP or HTTPS, other TCP and SSL traffic, and UDP traffic too.

\>>>Cloud Load Balancing provides cross-region load balancing, including automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy.
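To picture that gradual failover, here is a toy Python sketch that shifts a fraction of traffic away from an unhealthy region on each pass; the region names, weights, and step size are all invented for illustration, and the real service handles this automatically:

```python
def redistribute(weights, healthy, step=0.25):
    """Move a fraction of traffic away from unhealthy regions on each call."""
    drained = sum(weights[r] * step for r in weights if not healthy[r])
    healthy_regions = [r for r in weights if healthy[r]]
    new_weights = {}
    for region, weight in weights.items():
        if healthy[region]:
            new_weights[region] = weight + drained / len(healthy_regions)
        else:
            new_weights[region] = weight * (1 - step)
    return new_weights

# Invented example: two regions, one of which has just become unhealthy.
weights = {"us-central1": 0.5, "europe-west1": 0.5}
health = {"us-central1": True, "europe-west1": False}
for _ in range(3):
    weights = redistribute(weights, health)
    print(weights)  # europe-west1's share shrinks a little on every pass
```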

\>>>Cloud Load Balancing reacts quickly to changes in users, traffic, network, backend health, and other related conditions.

\>>> Cloud Load Balancing can absorb huge, sudden spikes in demand: you don’t need to file a support ticket or warn Google about incoming load in advance. No so-called “pre-warming” is required.

\>>>VPC offers a suite of load-balancing options: If you need cross-regional load balancing for a web application, use Global HTTP(S) load balancing. For Secure Sockets Layer traffic that is not HTTP, use the Global SSL Proxy load balancer. If it’s other TCP traffic that doesn’t use SSL, use the Global TCP Proxy load balancer.

\>>>Those last two proxy services only work for specific port numbers, and they only work for TCP.

\>>>If you want to load balance UDP traffic, or traffic on any port number, you can still load balance across a Google Cloud region with the Regional load balancer.

\>>>Finally, what all those services have in common is that they’re intended for traffic coming into the Google network from the internet.

\>>>But what if you want to load balance traffic inside your project, say, between the presentation layer and the business layer of your application?

\>>>For that, use the Regional internal load balancer. It accepts traffic on a Google Cloud internal IP address and load balances it across Compute Engine VMs.
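Putting the options above together, here is a small Python sketch of the selection logic just described; the function and its parameters are hypothetical, purely to summarize the choices:

```python
def choose_load_balancer(protocol, internal=False, global_scope=True):
    """Pick a Cloud Load Balancing option based on the traffic described.

    protocol: "http", "https", "ssl", "tcp", or "udp"
    internal: True for traffic between tiers inside your project
    global_scope: True if you need cross-region load balancing
    """
    if internal:
        return "Regional internal load balancer"
    if protocol in ("http", "https") and global_scope:
        return "Global HTTP(S) load balancer"
    if protocol == "ssl" and global_scope:
        return "Global SSL Proxy load balancer"
    if protocol == "tcp" and global_scope:
        return "Global TCP Proxy load balancer"
    # UDP traffic, or traffic on arbitrary ports, stays within a region.
    return "Regional load balancer"

print(choose_load_balancer("https"))              # Global HTTP(S) load balancer
print(choose_load_balancer("udp"))                # Regional load balancer
print(choose_load_balancer("tcp", internal=True)) # Regional internal load balancer
```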

Cloud DNS and Cloud CDN

\>>>One of the most famous free Google services is 8.8.8.8, which provides a public Domain Name Service to the world.

\>>>DNS is what translates internet hostnames to addresses, and as you might imagine, Google has a highly developed DNS infrastructure. It makes 8.8.8.8 available so that everyone can take advantage of it.
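You can try it yourself by pointing a resolver at 8.8.8.8. A minimal sketch using the third-party dnspython package (`pip install dnspython`); the hostname looked up is just an example:

```python
import dns.resolver  # third-party package: dnspython

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]  # use Google's public DNS service

# Look up the A records (IPv4 addresses) for an example hostname.
answer = resolver.resolve("www.google.com", "A")
for record in answer:
    print(record.address)
```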

What about the internet hostnames and addresses of applications built in Google Cloud?

\>>>Google Cloud offers Cloud DNS to help the world find them. It’s a managed DNS service that runs on the same infrastructure as Google. It has low latency and high availability, and it’s a cost-effective way to make your applications and services available to your users.

\>>>The DNS information you publish is served from redundant locations around the world.

\>>>Cloud DNS is also programmable. You can publish and manage millions of DNS zones and records using the Cloud Console, the command-line interface, or the API.
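For instance, publishing a record from Python might look roughly like the sketch below, using the google-cloud-dns client library (`pip install google-cloud-dns`); the project ID, zone, and record values are placeholders, and the exact client API can vary between library versions:

```python
from google.cloud import dns  # pip install google-cloud-dns

# Placeholders: swap in your own project ID and domain.
client = dns.Client(project="my-project")
zone = client.zone("example-zone", "example.com.")

# Create the managed zone if it does not exist yet.
if not zone.exists():
    zone.create()

# Publish an A record by submitting a change set to the zone.
record = zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])
change = zone.changes()
change.add_record_set(record)
change.create()  # apply the change; the record is then served worldwide
```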

\>>>Google also has a global system of edge caches. Edge caching refers to the use of caching servers to store content closer to end users.

\>>>You can use this system to accelerate content delivery in your application by using Cloud CDN (Content Delivery Network).

\>>>This means your customers will experience lower network latency, the origins of your content will experience reduced load, and you can even save money.

\>>>After HTTP(S) Load Balancing is set up, Cloud CDN can be enabled with a single checkbox.
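Outside the Console, the equivalent setting can also be flipped on the load balancer's backend service programmatically. Here is a hedged sketch using the google-cloud-compute Python client (`pip install google-cloud-compute`); the project and backend service names are placeholders, and method names may differ between library versions:

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

client = compute_v1.BackendServicesClient()

# Placeholders: your project and the backend service behind the HTTP(S) load balancer.
operation = client.patch(
    project="my-project",
    backend_service="my-backend-service",
    backend_service_resource=compute_v1.BackendService(enable_cdn=True),
)
operation.result()  # block until the update completes
print("Cloud CDN enabled on my-backend-service")
```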

Connecting Networks to Google Cloud VPC

Many Google Cloud customers want to connect their Google Virtual Private Clouds to other networks in their system, such as on-premises networks or networks in other clouds. There are several effective ways to accomplish this.

\>>>One option is to start with a Virtual Private Network connection over the internet and use the IPsec VPN protocol to create a “tunnel” connection. To make the connection dynamic, a Google Cloud feature called Cloud Router can be used.

\>>>Cloud Router lets other networks and your Google VPC exchange route information over the VPN using the Border Gateway Protocol (BGP). Using this method, if you add a new subnet to your Google VPC, your on-premises network will automatically get routes to it. But using the internet to connect networks isn't always the best option for everyone, either because of security concerns or because of bandwidth reliability.

\>>>A second option is to consider “peering” with Google using Direct Peering. Peering means putting a router in the same public data center as a Google point of presence and using it to exchange traffic between networks.

\>>> Google has more than 100 points of presence around the world. Customers who aren’t already in a point of presence can work with a partner in the Carrier Peering program to get connected.

\>>>Carrier Peering gives you direct access from your on-premises network through a service provider's network to Google Workspace and to Google Cloud products that can be exposed through one or more public IP addresses.

\>>>One downside of peering, though, is that it isn’t covered by a Google Service Level Agreement.

\>>> If getting the highest uptimes for interconnection is important, using Dedicated Interconnect would be a good solution. This option allows for one or more direct, private connections to Google.

\>>>If these connections have topologies that meet Google’s specifications, they can be covered by an SLA of up to 99.99%. Also, these connections can be backed up by a VPN for even greater reliability.

\>>> The final option we’ll explore is Partner Interconnect, which provides connectivity between an on-premises network and a VPC network through a supported service provider.

\>>>A Partner Interconnect connection is useful if a data center is in a physical location that can't reach a Dedicated Interconnect colocation facility, or if the data needs don’t warrant an entire 10 Gbps (gigabits per second) connection.

\>>>Depending on availability needs, Partner Interconnect can be configured to support mission-critical services or applications that can tolerate some downtime. As with Dedicated Interconnect, if these connections have topologies that meet Google’s specifications, they can be covered by an SLA of up to 99.99%, but note that Google isn’t responsible for any aspects of Partner Interconnect provided by the third-party service provider, nor any issues outside of Google's network.