Terminology used for building Microsoft Azure Virtual Networks

1. IP addresses:

There are two types of IP addresses assigned to resources in Azure: public and private.

Public IP addresses allow Azure resources to communicate with the Internet and with Azure public-facing services such as Azure Redis Cache.

Private IP addresses allow communication between resources in a virtual network, along with those connected through a VPN, without using an Internet-routable IP address.

Preferred IP Series for Intranets:

Small Network 1: 192.168.0.x – for 2⁸ systems – IP address range = 192.168.0.0/24 (only the last byte changes)

Small Network 2: 192.168.1.x – for 2⁸ systems – IP address range = 192.168.1.0/24 (only the last byte changes)

Large Network: 172.16.x.x – for 2¹⁶ systems – IP address range = 172.16.0.0/16 (last 2 bytes change)

Very Large Network: 10.x.x.x – for 2²⁴ systems – IP address range = 10.0.0.0/8 (last 3 bytes change)

Classless Inter-Domain Routing (CIDR) notation is a compact representation of an IP address and its associated routing prefix. The notation is constructed from an IP address, a slash (‘/’) character, and a decimal number. The number is the count of leading 1 bits in the routing mask, traditionally called the network mask.
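CIDR notation can be explored with Python's standard ipaddress module. This short sketch, using the 172.16.0.0/16 range from above, shows the prefix length (the count of leading 1 bits), the equivalent dotted-decimal mask, and the resulting address count:

```python
import ipaddress

# Parse a CIDR string into a network object.
net = ipaddress.ip_network("172.16.0.0/16")

print(net.prefixlen)      # count of leading 1 bits in the routing mask: 16
print(net.netmask)        # dotted-decimal form of the same mask: 255.255.0.0
print(net.num_addresses)  # 2 ** (32 - 16) = 65536 addresses in the range
```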

Public IP Addresses

There are two methods of allocating an IP address to a public IP resource: dynamic or static.

  1. In the dynamic allocation method, the IP address is not allocated at the time of the public IP resource's creation. Instead, the public IP address is allocated when you start (or create) the associated resource (such as a VM or load balancer). The IP address is released when you stop (or delete) the resource. This means the IP address can change.
  2. In the static allocation method, the IP address for the associated resource does not change. In this case an IP address is assigned immediately. It is released only when you delete the resource or change its allocation method to dynamic.

Public IP addresses allow Azure resources to communicate with the Internet and with Azure public-facing services such as Azure Redis Cache, Azure Event Hubs, SQL databases, and Azure Storage.

In Azure Resource Manager, a public IP address is a resource that has its own properties. You can associate a public IP address resource with any of the following resources:

    • Virtual machines (VM)
    • Internet-facing load balancers
    • VPN gateways
    • Application gateways

Note: The first 5 static public IP addresses in a region are free. This applies irrespective of the type of resource (VM or load balancer) to which the IP address is associated. All others are charged at $0.004/hr.

Private IP Addresses

  • The IP address is allocated from the address range of the subnet to which the resource is attached.
  • The default allocation method is dynamic, where the IP address is automatically allocated from the resource’s subnet (using DHCP). This IP address can change when you stop and start the resource.
  • You can set the allocation method to static to ensure the IP address remains the same. In this case, you also need to provide a valid IP address that is part of the resource’s subnet.
  • Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an Internet-reachable IP address.
  • In the Azure Resource Manager deployment model, a private IP address is associated to the following types of Azure resources:
    • VMs
    • Internal load balancers (ILBs)
    • Application gateways
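The rule that a static private IP must be part of the resource's subnet can be sketched with Python's standard ipaddress module; the addresses below are illustrative only. Note that Azure also reserves a few addresses in each subnet (the first four and the last), which this simplified check does not account for:

```python
import ipaddress

def valid_static_ip(ip: str, subnet: str) -> bool:
    """Return True if the requested static IP falls inside the subnet's range."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(subnet)

print(valid_static_ip("10.0.1.17", "10.0.1.0/24"))  # True: inside the subnet
print(valid_static_ip("10.0.2.17", "10.0.1.0/24"))  # False: outside the subnet
```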


2. Subnet:

A subnet is a range of IP addresses in the VNet. You can divide a VNet into multiple subnets for organization and security. VMs and PaaS role instances deployed to subnets (same or different) within a VNet can communicate with each other without any extra configuration. You can also apply route tables and NSGs to a subnet.

The subnet mask is set based on the number of systems in the network.

255.255.255.0 – 2⁸ systems

255.255.0.0 – 2¹⁶ systems

255.0.0.0 – 2²⁴ systems
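The relationship between each mask and its host capacity can be verified with a short Python sketch using the standard ipaddress module (the host bits are the zero bits of the mask):

```python
import ipaddress

capacity = {}
for mask in ["255.255.255.0", "255.255.0.0", "255.0.0.0"]:
    # ip_network accepts the dotted-decimal mask form; host bits are the
    # zero bits of the mask, and capacity is 2 ** host_bits.
    host_bits = 32 - ipaddress.ip_network(f"0.0.0.0/{mask}").prefixlen
    capacity[mask] = 2 ** host_bits

print(capacity)  # {'255.255.255.0': 256, '255.255.0.0': 65536, '255.0.0.0': 16777216}
```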

3. Network Interface Card (NIC):

VMs communicate with other VMs and other resources on the network by using a virtual network interface card (NIC). A virtual NIC configures a VM with a private and, optionally, a public IP address. VMs can have more than one NIC for different network configurations.

Note: VMs can have more than one NIC adapter linking the VM to the virtual network. The number of NICs you can attach to a VM depends on its size. For example, a VM based on the D2 size can have 2 NICs, and a D4-based VM can have a maximum of 8 NICs. A multiple-NIC configuration is common for virtual appliances that provide additional control of traffic in virtual networks.

4. Network Security Group (NSG):

You can create NSGs to control inbound and outbound access to network interfaces (NICs), VMs, and subnets. Each NSG contains one or more rules specifying whether traffic is allowed or denied based on source IP address, source port, destination IP address, and destination port.

Some important things to keep in mind while implementing network security groups include:

  • By default, you can create 100 NSGs per region per subscription. You can raise this limit to 400 by contacting Azure support.
  • You can apply only one NSG to a subnet or NIC.
  • By default, you can have up to 200 rules in a single NSG. You can raise this limit to 500 by contacting Azure support.
  • You can apply a single NSG to multiple resources.
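NSG rules are evaluated in priority order (lower number first), and the first matching rule decides the outcome. That evaluation can be illustrated with a small Python sketch; the rule records and field names here are simplified stand-ins, not the real NSG schema:

```python
# Hypothetical, simplified NSG rule records: lower priority number is
# evaluated first, and the first matching rule decides the outcome.
rules = [
    {"priority": 100, "source": "Internet", "dest_port": 443, "access": "Allow"},
    {"priority": 200, "source": "Internet", "dest_port": 22,  "access": "Deny"},
]

def evaluate(rules, source, dest_port, default="Deny"):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["source"] == source and rule["dest_port"] == dest_port:
            return rule["access"]
    return default  # no user rule matched; real NSGs fall through to default rules

print(evaluate(rules, "Internet", 443))  # Allow
print(evaluate(rules, "Internet", 22))   # Deny
```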

5. Azure Load Balancer:

The Azure Load Balancer delivers high availability and network performance to your applications. It is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy service instances in cloud services or virtual machines defined in a load-balanced set.

6. Application Gateways:

Azure Application Gateway is a layer-7 load balancer. It provides failover and performance-based routing of HTTP requests between different servers, whether they are in the cloud or on-premises. Application Gateway provides many Application Delivery Controller (ADC) features, including HTTP load balancing, cookie-based session affinity, Secure Sockets Layer (SSL) offload, custom health probes, multi-site support, and many others.

7. Traffic Manager:

Microsoft Azure Traffic Manager allows you to control the distribution of user traffic for service endpoints in different datacenters. Service endpoints supported by Traffic Manager include Azure VMs, Web Apps, and cloud services. You can also use Traffic Manager with external, non-Azure endpoints.

Traffic Manager uses the Domain Name System (DNS) to direct client requests to the most appropriate endpoint.

8. VPN Gateways:

A VPN gateway is used to connect an Azure virtual network (VNet) to other Azure VNets or to an on-premises network. You need to assign a public IP address to its IP configuration so that it can communicate with the remote network. Currently, you can only assign a dynamic public IP address to a VPN gateway.

9. Azure DNS:

The Domain Name System (DNS) enables clients to resolve user-friendly fully qualified domain names (FQDNs), such as www.adatum.com, to IP addresses. Azure DNS allows you to host your domains alongside your Azure apps. By hosting your domains in Azure, you can manage your DNS records by using your existing Azure subscription.

Traffic Manager in Azure App services


  • Traffic Manager is used for applications that need to scale beyond the capacity of a single deployment or whose users are globally dispersed.
  • Deploying your web app to multiple regions (or datacenters) is a scale-out strategy that can be used to achieve massive scalability for your web app.
  • Assume, for example, that you have a web app deployment in the Central US region. If your users are dispersed around the world, then you may choose to deploy to the West US, East US, and North Europe regions as well. Doing so will significantly increase capacity of your web app.
  • The challenge of routing users to one of many web app deployments can be met by using Azure Traffic Manager. This is a networking service that can be used to achieve global scale for your web apps by allowing you to control how user traffic is routed to multiple deployments of your application.

To use Traffic Manager, you first must create a Traffic Manager profile, which specifies a unique DNS name for the profile in the trafficmanager.net domain, a list of endpoints (the web app deployments), and a traffic-routing method, which can be one of the following:

  1. Performance: Select this method if your endpoints are deployed in different geographical locations and you want to use the one with the lowest latency (closest to the user).
  2. Priority: Select this method when you want traffic to go to the available endpoint with the highest priority.
  3. Weighted: Select this method when you want to distribute traffic across a set of endpoints according to the weights provided.
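The routing methods can be illustrated with a toy Python sketch; the endpoint records, names, and fields below are hypothetical, and only the priority and weighted methods are modeled (performance routing would additionally require latency measurements):

```python
import random

# Hypothetical endpoint records (names and fields are illustrative only).
endpoints = [
    {"name": "westus",  "priority": 1, "weight": 3, "healthy": False},
    {"name": "eastus",  "priority": 2, "weight": 2, "healthy": True},
    {"name": "neurope", "priority": 3, "weight": 1, "healthy": True},
]

def pick_priority(eps):
    # Priority routing: the healthy endpoint with the highest priority
    # (lowest priority number) receives all traffic.
    return min((e for e in eps if e["healthy"]), key=lambda e: e["priority"])

def pick_weighted(eps, rng=random):
    # Weighted routing: healthy endpoints are chosen in proportion to weight.
    healthy = [e for e in eps if e["healthy"]]
    return rng.choices(healthy, weights=[e["weight"] for e in healthy])[0]

print(pick_priority(endpoints)["name"])  # eastus (westus is unhealthy)
```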

How Traffic Manager works:

When you configure a Traffic Manager profile, the settings that you specify provide Traffic Manager with the information needed to determine which endpoint should service the request based on a DNS query. No actual endpoint traffic routes through Traffic Manager.

  1. User traffic to company domain name: The client requests information using the company domain name. The goal is to resolve a DNS name to an IP address. Company domains must be reserved through normal Internet domain name registrations that are maintained outside of Traffic Manager. In Figure 1, the example company domain is www.contoso.com.
  2. Company domain name to Traffic Manager domain name: The DNS resource record for the company domain points to a Traffic Manager domain name maintained in Azure Traffic Manager. This is achieved by using a CNAME resource record that maps the company domain name to the Traffic Manager domain name. In the example, the Traffic Manager domain name is contoso.trafficmanager.net.
  3. Traffic Manager Domain name and profile: The Traffic Manager domain name is part of the Traffic Manager profile. The user’s DNS server sends a new DNS query for the Traffic Manager domain name (in our example, contoso.trafficmanager.net), which is received by the Traffic Manager DNS name servers.
  4. Traffic Manager Profile rules processed: Traffic Manager uses the specified traffic routing method and monitoring status to determine which Azure or other endpoint should service the request.
  5. Endpoint domain name sent to user: Traffic Manager returns a CNAME record that maps the Traffic Manager domain name to the domain name of the endpoint. The user’s DNS server resolves the endpoint domain name to its IP address and sends it to the user.
  6. User calls the endpoint: The user calls the returned endpoint directly using its IP address.

Note: Since the company domain and resolved IP address are cached on the client machine, the user continues to interact with the chosen endpoint until its local DNS cache entry expires. It is important to note that the DNS client caches DNS host entries for the duration of their Time-to-Live (TTL). Retrieving host entries from the DNS client cache bypasses the Traffic Manager profile and you could experience connection delays if the endpoint becomes unavailable before the TTL expires.
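The caching behavior described in the note can be modeled with a toy Python sketch; the class and hostname below are illustrative, not an Azure API:

```python
import time

class DnsCache:
    """Toy model of a client-side DNS cache: an answer is reused until its
    TTL expires, so Traffic Manager's routing is bypassed on cache hits."""

    def __init__(self):
        self._entries = {}  # name -> (answer, expiry timestamp)

    def get(self, name, resolve, ttl=30.0):
        answer, expiry = self._entries.get(name, (None, 0.0))
        if time.monotonic() < expiry:
            return answer          # cache hit: no new DNS query is made
        answer = resolve(name)     # cache miss: query the name servers again
        self._entries[name] = (answer, time.monotonic() + ttl)
        return answer

cache = DnsCache()
# The resolver and returned endpoint name here are purely hypothetical.
endpoint = cache.get("www.contoso.com", lambda name: "contoso-eastus.example.net")
print(endpoint)
```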

To implement Traffic Manager

  1. Deploy the Web Apps in different Geographical locations
  2. Browse –> Traffic Manager profiles –> Add
  3. Set Name = Demo, Routing Method = Weighted –> Create
  4. Go to Traffic Manager –> Settings –> Endpoints –> Add
  5. Type = Azure EndPoint, Name=WebApp1EP, Target Resource Type = App Service, Choose an App Service, Weight = 1 –> OK
  6. Repeat step 5 for every Web App deployment.

Scaling a Web App in Azure Web App

Scaling a Web App

  • Whether your application needs to handle a few hundred requests per day or a few million requests per day,
    the Azure Web Apps scalability features provide ways for you to deliver the right level of scale in a robust,
    cost-effective manner.
  • When you consider the scalability requirements of an application, you should look at its resource requirements
    vertically (scaling up) and horizontally (scaling out).
  • You typically choose to scale up when any single request demands more memory and processing power to complete,
    and the bottleneck (or latency) in the system is the large number of software objects created in the computer's
    memory or the intensive algorithms and business logic being performed. When you scale up a web app, you increase
    the resource capacity, such as RAM and CPU cores, of the virtual machine on which your web app is running.
  • You typically scale out when any single request requires less memory and processing power to complete, but
    the real bottleneck (or latency) is in network communication, disk access, and so on. In this case, the key to
    completing each request more efficiently is to run it in parallel with other requests as each waits on external
    components to complete. To scale out a web app, you increase the number of virtual machine instances on which
    your web app is running. For a properly architected app, this means your web app can handle more load and
    therefore service more user requests.

Scale Up the Azure Web App:

  • The ability to scale up a web app exists only for web apps configured for the Basic, Standard, or Premium pricing tiers.
  • The scale settings take only seconds to apply and affect all web apps in your App Service plan.
    They do not require your code to be changed or your applications to be redeployed.

To Scale Up:

  1. App Services –> Select App Service –> Settings –> Change App Service Plan
    (In App Service Plan) –> Select / Create New Plan
  2. Select the Pricing tier based on following options:
    1. Number of Cores
    2. RAM
    3. Storage
    4. Slots (Number of CPU Instances)
    5. Backup frequency
    6. Traffic Manager facility

To Scale Out:

The number of Virtual Machine Instances you can scale out is limited by the pricing tier configured for your web app.

  1. App Services –> Select App Service –> Settings –> App Service Plan
  2. Select Scale Out (App Service Plan) to configure settings
    1. Scale by: Manual – Manual setup means that the number of instances you choose won’t change,
      even if there are changes in load.
    2. Scale by: CPU percentage: Automatically scale based on CPU Percentage used. You can choose
      an average value you want to target.
    3. Scale by: Schedule and Performance Rules – Create your own set of rules. Create a schedule
      that adjusts your instance counts based on time and performance metrics.

Auto scale based on CPU percentage:

  • The Target range setting defines the minimum and maximum CPU percentage to target.
  • As long as the CPU percentage is within this range, Autoscale will not increase or decrease the number
    of instances.
  • When the CPU percentage exceeds the maximum CPU percentage you specify, Autoscale will add an instance.
    If CPU percentage continues to exceed the maximum CPU specified, then Autoscale will add another instance.
  • At no point will you have more than the maximum number of instances specified in the Instances setting.
  • Similarly, when CPU percentage falls below the minimum CPU percentage you specify, Autoscale will remove
    an instance. If CPU percentage continues to fall below the minimum CPU percentage specified, then Autoscale
    will remove another instance. At no point will you have fewer than the minimum number of instances specified
    in the Instances setting.

Note: The CPU percentage is measured as an average across all instances. For
example, if you have two instances, one of which is running at 50 percent CPU and the other of which is running
at 100 percent CPU, then the average CPU percentage would be 75 percent at that point in time.
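The scale-out/scale-in logic above, including the averaging described in the note, can be sketched as a single hypothetical evaluation step in Python (the function name and parameters are illustrative, not the Autoscale API):

```python
def autoscale_step(cpu_per_instance, current, target_min, target_max,
                   inst_min, inst_max):
    """One hypothetical Autoscale evaluation: the average CPU across all
    instances drives the count up or down, clamped to the instance bounds."""
    avg = sum(cpu_per_instance) / len(cpu_per_instance)
    if avg > target_max:
        current += 1          # above the target range: add an instance
    elif avg < target_min:
        current -= 1          # below the target range: remove an instance
    return max(inst_min, min(inst_max, current)), avg

# The note's example: two instances at 50% and 100% CPU average to 75%,
# which exceeds a 40-70% target range, so one instance is added.
count, avg = autoscale_step([50, 100], current=2, target_min=40,
                            target_max=70, inst_min=1, inst_max=5)
print(avg, count)  # 75.0 3
```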

Auto scale based on a recurring schedule:

This can be particularly useful when demand for your web app is predictable. For example, if your web app provides
services for an industry where most work is done Monday through Friday, then you could configure Autoscale to
increase the number of instances during the week to support peak demand and decrease the number of instances
on weekends when demand is very light.
