DNS and DHCP Best Practices: Architectures that Work

Author: Amice Ward

Introduction

DNS and DHCP network architectures can range from simple to extremely complex. Regardless of the complexity, a good network design eliminates single points of failure, hardens the network against attacks, meets the performance requirements of the business, and provides for business agility. Business agility is the number one pressure on network teams and IT in general. Cloud, SDN, and other initiatives are stressing the ability of network teams to manage provisioning and operations. Creating a DNS and DHCP architecture that provides resilient and flexible services where the business needs them is a key requirement for network agility.

This white paper examines some of the design options available to network architects. It includes a number of recommended industry best practices that help ensure DNS and DHCP environments are reliable, secure, and manageable. While budget constraints can affect design decisions, security, performance, availability, and adaptability are fundamental design objectives for every network. The DNS and DHCP best practices outlined in this paper are intended to help network architects achieve these fundamental goals. These recommendations are based on BlueCat's extensive experience in helping organizations implement core services architectures that work across a broad spectrum of topologies.

www.bluecatnetworks.com


For external DNS:

• Configure the external primary DNS server as a hidden master. This configuration protects the primary server, provides maximum performance, and increases tolerance to failure. Where possible, deploy primary servers in high availability clusters.

• Deploy secondary servers in geographically-dispersed data centers to avoid a single point of failure scenario and to bolster protection against DDoS attacks.

• Place secondary servers within the network's demilitarized zone (DMZ). This minimizes the types of data traffic to which they are exposed, affording greater security.

• Secure zone transfers using access control lists (ACLs) and transaction signatures (TSIGs). These measures prevent the source and destination of zone transfers from being spoofed.

• Disable recursion on external servers to eliminate the risk of cache poisoning and to prevent their use as open resolvers in DNS attacks.

• On Unix- and Linux-based systems, run DNS in a "jailed" environment to sandbox the processes and thereby minimize the damage possible through any future exploits.

• Hide information that indicates the version of DNS server software deployed. This information benefits attackers, who can exploit any known vulnerabilities.

• Protect external DNS servers against DDoS attacks with multiple layers of defense. Such layering should include cloud-hosted, upstream ISP, and datacenter DDoS protections that can stop the majority of attack traffic from reaching the DNS server. Host-based DDoS protections alone are inadequate, as they do not prevent other services from being impacted.

For internal DNS:

• Locate internal DNS servers on the internal network, behind firewalls.

• Use virtual private networks (VPNs) to connect remote users to internal resources.

• To enhance performance and reliability, consider using a hidden master for the internal primary DNS server.

• Where possible, deploy secondary servers locally at each site to preserve network bandwidth. An analysis of bandwidth requirements – the frequency of DNS queries on the local WAN link – can help determine whether small sites warrant secondary servers.

• As alternatives to secondary servers, consider stealth secondary servers or caching-only servers for small sites. These require less network bandwidth.

• The size and complexity of the internal DNS affects your design decisions. Consider deploying internal root servers for large, distributed networks or those with complex namespaces. Internal root servers can enhance scalability, efficiency and control.

For caching servers:

• Separate caching services from authoritative services to improve performance and security.

• Use forwarders to build a centralized cache, which improves performance.

• Consider using a dedicated internal caching layer built from multiple Anycast pools of servers. This approach greatly simplifies DNS client configuration, ensures optimal reliability of the DNS services, and reduces the load on authoritative secondary servers.

For DHCP:

• The number of DHCP servers you deploy depends greatly on the requirements of your organization. Carefully plan your DHCP deployment to ensure maximum reliability and scalability.

• To ensure service availability and eliminate single points of failure, deploy DHCP servers in redundant, failover configurations using DHCP Failover.

• Separate DHCP configurations for different types of networks onto their own DHCP servers – for example, separate servers for wireless, wired, and guest networks. This separation allows for simpler troubleshooting when issues arise.

• Deploy enough DHCP servers to create acceptable fault domains, so that a large number of client devices are not impacted by the failure of a DHCP server or its related network.

Adopting these best practices will help your organization dramatically reduce the risk of service outages and enhance overall network security.

DNS

DNS is a critical service for every organization. Without it, necessary business applications cannot function and users cannot reach them. However, many organizations have not taken the time and care to properly deploy DNS within their network. An improper DNS deployment can leave an organization susceptible to DNS outages, failures and security risks. Organizations need to seriously examine their current DNS deployments and take the necessary steps to ensure that their systems are fault tolerant, reliable and secure.

At its most basic, a DNS deployment requires a minimum of two name servers – one server hosts the primary (also known as master) copies of the zones, and the other hosts the secondary (or slave) copies. The primary zones are writeable and all DNS updates are made to them. Secondary zones are read-only and exist to provide redundancy and to relieve some of the load from the primary. Secondary zones are updated by the primary through a mechanism known as zone transfers.

DNS is required on the internal side of the network to provide users and systems with access to resources by name. Services such as Active Directory rely on DNS and cannot function without it. On the external side of the network, DNS is perhaps the most frequently used network service on the Internet. Every time users connect to Web sites or send email messages they are using external DNS.

External DNS

An organization's external DNS server provides the rest of the world with access to the corporate Web site, email services and any number of external-facing applications. Since external DNS servers face the Internet, they have the highest exposure to attacks. An external DNS architecture usually consists of a small number of servers, typically a master and several slaves. These servers must be well protected and hardened to reduce the attack surface and to ensure service availability. The following sections describe some of the best practices for primary and secondary external DNS servers.

Tip: While secondary zones can continue to function in the event of a primary zone failure, it is important to ensure that if the primary server fails, it is brought back online before the zone's Start of Authority (SOA) expiry time elapses. Setting this value to the recommended duration of 2 to 4 weeks should allow adequate time to correct the problem.

1. Primary Server – Hidden Master Configuration

Because all changes go through the primary zone, it must be protected from both attack and failure. BlueCat recommends running the primary DNS server as a hidden master. A hidden master configuration allows you to remove the server from the exposed network and place it behind the internal firewall on the trusted side of the network. Internet users connect to publicized secondary servers, which provide DNS resolution for the organization. This secures the primary DNS server from attack, while allowing the server to update the secondary DNS servers that are serving DNS records.

A hidden master gets its name from the fact that its name server (NS) resource record – the record that identifies it as a DNS server – is not listed in the zone (or as a delegation record on the zone's parent servers). This renders it essentially invisible to the public. Because you cannot attack what you cannot see, hidden masters are better protected.

Aside from the obvious security benefits, a hidden master configuration pays dividends in performance as well. Free from the need to respond to external queries, the primary server can focus on zone maintenance, such as notifying secondary servers of changes and responding to zone transfer requests.

The hidden master also increases tolerance to failure. As crucial as the primary server is, if it were to fail, there would be no immediate service disruption. Resolvers on the Internet, unaware of the primary server as a source of name resolution, would continue to query the secondary servers. The presence of secondary servers notwithstanding, BlueCat recommends placing hidden masters in redundant configurations, such as high-availability clusters, to eliminate single point of failure scenarios. Redundancy ensures service availability in the event of a hardware or service failure within the hidden master server.
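To illustrate, a hidden master in BIND might be sketched as follows; the zone name, file names and secondary addresses are placeholders, and a production deployment will differ:

```
// named.conf on the hidden master (addresses are illustrative)
options {
    notify explicit;                             // notify only listed servers
    also-notify { 192.0.2.53; 198.51.100.53; };
};

zone "example.com" {
    type master;
    file "db.example.com";
    // Only the publicized secondaries may request zone transfers
    allow-transfer { 192.0.2.53; 198.51.100.53; };
};
```

The zone file itself lists only the secondaries in its NS records, so the master never appears in public DNS responses.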

[Figure: A hidden master cluster (XHA) sits on the internal network behind the internal firewall and replicates to secondary DNS servers placed in the DMZ at each major datacenter; client requests arrive from the Internet through the external firewall.]
2. Multiple Secondary Servers

The DNS specification and domain name registration rules require a minimum of two servers for every external authoritative domain. Although a minimal DNS architecture is comprised of a primary and a single secondary server, a hidden master configuration requires at least two secondary servers. To ensure external DNS is as reliable as possible, consider using three secondary servers. With an additional backup, if one secondary server goes down for any reason, the external DNS is still redundant, and not 'a single failure away' from unavailability. BlueCat recommends four or five secondary servers where an extremely high level of traffic is anticipated and greater reliability is required.

Placing five secondary servers in one location will not help if a network or hardware outage, or a natural disaster, brings the subnet or data center down and stops DNS. Locate servers in geographically-dispersed data centers with reliable, redundant networks. Consider placing at least one secondary server in a co-location facility, where it can take advantage of redundant connections to the Internet. Where it is necessary to place multiple servers in the same data center, place each server on an individual subnet, behind different switches and routers.


3. DMZs

Wherever possible, avoid placing external DNS servers directly on the Internet without the protection of a firewall. Placing secondary servers behind firewalls in a DMZ minimizes the types of data traffic to which they are exposed, affording greater protection from attack. Traffic to and from DNS name servers should be filtered to allow only DNS traffic (UDP and TCP port 53) from external servers. This limits the exposure of the server to only the DNS service.

4. Restricted Zone Transfers

Restricting zone transfers is one of the easiest and most effective ways to secure external DNS. As discussed earlier, zone transfers are used to update secondary servers with changes to zone data. Because zone transfers contain the names and IP addresses of network devices, you should ensure that only secondary servers are allowed to request and receive them. Allowing zone transfers to any host increases DNS vulnerability significantly:

• Zone transfers provide potential attackers with a 'map' of the external network. (This weakness worsens considerably if an internal system is accidentally advertised in external zones.)

• Attackers can use a script to repeatedly request zone transfers from a targeted server. Such denial of service (DoS) attacks tie up both server resources and network bandwidth, rendering servers unavailable.

Restricting zone transfers to explicitly authorized hosts minimizes these risks. Use access controls to restrict zone transfers to only secondary servers. A transfer request by any other host – one that does not have its IP address stipulated in the access list – is refused. In addition, make sure the firewall rules in the DMZ are set to block any attempt to 'spoof' internal IP addresses.

While restricting zone transfers to IP addresses is good, using transaction signatures (TSIGs) is better. TSIGs bring the power of cryptography to zone transfers. Each server (both primary and secondary) shares a copy of a symmetric, cryptographic TSIG key (shared secret). Every transaction (notification and zone transfer) is run through a hashing algorithm, which produces an output called a digest. The digest is then signed with the TSIG key. TSIG provides mutual authentication in that each server in the transaction must identify itself to the other. Only holders of the TSIG shared secret are allowed to request zone transfers, making it nearly impossible for an attacker to spoof the identity of a secondary server. TSIGs also ensure the integrity of transactions. Hash digests provide a means for either server to determine if transaction data has been modified en route.
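As a sketch of what this looks like in BIND – the key name, secret and addresses below are placeholders; a real secret would be generated with a tool such as tsig-keygen and distributed securely:

```
// Shared TSIG key, configured identically on primary and secondaries
key "xfer-key" {
    algorithm hmac-sha256;
    secret "c2FtcGxlLXBsYWNlaG9sZGVyLXNlY3JldA==";   // placeholder only
};

// On the primary: refuse transfers not signed with the key
zone "example.com" {
    type master;
    file "db.example.com";
    allow-transfer { key "xfer-key"; };
};

// On each secondary: sign requests to the master with the key
server 203.0.113.1 {
    keys { "xfer-key"; };
};
```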


Though many DNS administrators agree that transaction signatures greatly enhance zone security, they are not always implemented due to their complexity and the challenges associated with getting the key transferred securely between the authorized servers. If the shared key is compromised, security is lost. Ensuring that TSIG keys are securely transferred can be made easier through the deployment of an appliance-based DNS solution, which automates the configuration and secure transfer of the TSIG key to servers.

5. Recursive Queries

A DNS server that accepts recursive queries looks to other DNS servers, often starting at the Internet root servers, if it cannot answer the query itself. Recursive queries are an essential part of DNS and must be allowed on some internal DNS servers, so that internal users can resolve names on the Internet and on different parts of the organization's internal network.

Recursion is not required on an external DNS server and should be disabled. An external server that responds to recursive queries is vulnerable to cache poisoning attacks, such as the vulnerability discovered by Dan Kaminsky in mid-2008. Cache poisoning occurs when an attacker feeds a DNS server with false data records before the authentic answer is returned. The false records often direct unsuspecting DNS clients to a site of the attacker's choosing. Recent versions of DNS software include measures to combat cache poisoning.

Responding to recursive queries is also performance intensive, much more so than answering queries directly from a local zone. A recursive query generally requires that the DNS server contact multiple servers, one at a time. Too many recursive queries lead to significant performance degradation. It follows that an external DNS server responding to recursive queries is vulnerable to denial of service attacks. An attacker who is able to submit a large enough number of recursive queries can bring the server to its knees.
BlueCat strongly recommends that administrators disable recursion on external servers. If this is not possible, recursive queries should be restricted to trusted, internal clients.
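In BIND, for example, disabling recursion on an authoritative-only external server amounts to a few options; the sketch below also refuses to hand out cached answers:

```
options {
    recursion no;                 // never resolve on behalf of clients
    allow-query { any; };         // still answer authoritative queries
    allow-query-cache { none; };  // never serve cached data
};
```

If recursion cannot be disabled outright, an ACL applied through allow-recursion can at least restrict it to trusted, internal clients.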


6. Further Considerations

One of the most fundamental security principles is defense in depth – the more layers of protection you add, the more secure your infrastructure will be. Consider the following additional recommendations when designing DNS networks:

• Ensure that you are running the latest version of DNS software. Earlier versions of DNS software have inherent security vulnerabilities that attackers can exploit. Running the latest version of DNS software helps safeguard DNS against known attacks and destructive exploits.

• Run the DNS software in a jailed environment. This strategy includes running the DNS service as a limited user rather than as root. In the event that an attacker is somehow able to compromise DNS, the attacker is 'sandboxed' within a restricted directory structure and is prevented from accessing the rest of the server. This restricts, or jails, the attacker and ensures that they do not have access to a shell prompt or the rest of the system.

• Hide the DNS software version. If attackers can determine the version of DNS software in operation, they have additional information with which to plan attacks and exploit known vulnerabilities.
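The version-hiding recommendation, for instance, is a single option in BIND (the replacement string is arbitrary); jailing is typically achieved by starting the daemon as a non-root user in a chroot directory, e.g. with BIND's -u and -t flags:

```
options {
    // Answer version.bind queries with a non-informative string
    version "not disclosed";
};
```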

Internal DNS

Internal DNS gives clients access to both internal resources and those on the Internet by name. Many of the strategies used to secure and manage external DNS are applicable to internal DNS. However, internal DNS servers should always be located on the internal network, behind a firewall. It is essential that internal resources cannot be accessed from the outside without additional safeguards. Use virtual private networks (VPNs) to provide trusted users outside the network with access to internal resources.

Primary Server

Consider using a hidden master on the internal network as well. Performance and fault tolerance are the drivers here. A hidden master does not resolve internal queries, and can be left to manage zone transfers and accept dynamic DNS updates. Should the hidden master fail for any reason, queries continue to be resolved without disruption, as internal clients are configured to use the secondary servers. Where possible, the primary server should be part of a high availability cluster to protect against hardware and service failures.


Secondary Servers

Determining the number and placement of secondary servers is a critical design consideration. One of the more important decisions is whether each site should have a local secondary server (or servers). Factors to consider include the number of users located at each site and the speed and quality of network services to the site. Users located at a site without a local authoritative DNS server must contact a remote location to resolve internal names. When a site has a large number of users, remote resolution can strain local WAN links. A forecast of query frequency (i.e. the number of queries) monitored over time can help determine whether traffic at a small site is increasing and requires a local name server. As a general rule, all sites should have one or more local secondary servers, with small branch offices being the possible exception.

When a site requires a local secondary server and the network connection is slow, consider implementing a stealth slave. Like a hidden master, the stealth slave does not have its name server records published in the zone. Although zone transfers are still required to keep the stealth secondary server up-to-date, the server is not queried, which helps with bandwidth concerns. A stealth slave can also be used as a potential backup for the hidden master.

Another option is to deploy caching-only servers instead of secondary servers at small and remote offices. A caching-only server does not host any zones. Used effectively, a caching-only server accepts queries for both Internet and other internal resources. Queries for internal names are forwarded to internal authoritative name servers, the results of which are cached. Caching servers do not request zone transfers, and so conserve bandwidth.

DDNS

Dynamic DNS (DDNS) is used on internal networks to allow DNS clients and DHCP servers to dynamically update DNS with forward and reverse address mappings.
A DDNS-capable client dynamically updates its hostname and IP address in DNS. You can either configure the client to register its host records directly with a DNS server, or configure a DHCP server to forward records to the DNS server on behalf of the client. In either case, the DNS server that needs to be updated is determined using Start of Authority (SOA) records. Because secondary servers are read-only, only primary DNS servers can accept dynamic updates. As a result, care must be taken when configuring a hidden master to ensure that the address of the primary server is listed as the server to be updated in the Start of Authority record.

In order to control DDNS updates, BlueCat recommends using a DHCP server to register DNS records on behalf of clients. This removes the need for client systems to update DNS directly and helps to secure DNS by limiting the number of systems that can update the DNS server. Dynamic DNS greatly eases administration because it eliminates the need to manually enter large numbers of records. Given the rise in DHCP-enabled devices, such as Voice over IP (VoIP) phones, wireless devices, Radio Frequency Identification (RFID) equipment and other devices, DDNS is almost always a necessity for networks to operate properly.
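As a sketch of the recommended server-driven approach using ISC DHCP – the zone names, key and server address are placeholders:

```
# dhcpd.conf – the DHCP server registers records on behalf of clients
ddns-update-style interim;
ddns-updates on;
ignore client-updates;            # clients may not update DNS directly

key "ddns-key" {
    algorithm hmac-md5;
    secret "cGxhY2Vob2xkZXItb25seQ==";   # placeholder secret
};

# Forward and reverse zones point at the primary listed in the SOA
zone corp.example.com. {
    primary 10.0.0.53;
    key "ddns-key";
}

zone 0.10.in-addr.arpa. {
    primary 10.0.0.53;
    key "ddns-key";
}
```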


Internal Root Servers

Organizations with very large, distributed networks and those with complex namespaces can benefit from deploying internal root servers. Internal root servers perform the same function for the internal network that the Internet root servers perform for the Internet. Internal DNS servers are configured with the IP addresses of the internal root servers, and any queries that local DNS servers cannot answer are forwarded to the root servers. These queries are then delegated to the appropriate name server within the organization. Setting up internal root servers offers several advantages:

• Scalability – If the entire Internet can be supported with only 13 root servers, internal root servers can easily meet the architecture scalability requirements of any organization.

• Confidentiality of Internal Information – Internal lookups remain in the internal namespace and do not escape to the Internet, even if the requested resource exists on another server in a geographically distant location. No requests for internal names will ever leave the organization.

• Efficient Lookups – Forwarding queries in a very large, distributed DNS network can be complicated. Root servers simplify the design – you know where every query will go.

It is worth noting that root servers need not be dedicated. Any authoritative DNS server can host root zones. Of course, you should set up at least two such DNS servers as root servers to eliminate a single point of failure. When using internal root servers, you must consider how internal clients will be able to access the Internet (assuming this is desirable). You can enable Internet access in a number of ways:

• Configure internal root servers to forward queries to external servers for resolution. Any query for which the internal root servers are not authoritative can be forwarded (although this option can present problems if the internal root server is authoritative for top-level domains that also exist on the Internet).

• Configure internal clients so that they can query different DNS servers, depending on whether the request is internal or external. To use this option, clients must be proxy-capable and either support software proxies or proxy local address tables (LATs).
Caching Servers

Caching servers accept recursive queries from DNS clients (either stub resolvers or other name servers) and resolve those queries by contacting authoritative name servers. When designing DNS, you must consider Internet access. Although any DNS server can be configured to accept recursive queries, deciding which servers are allowed to access the Internet root servers requires some planning. Because most DNS exploits and vulnerabilities target servers with recursion enabled, enabling recursion on a server increases that server's exposure to DNS exploits.


Separation of Services

BlueCat recommends separating internal authoritative DNS services from caching services. A server that hosts internal authoritative zones should never access the Internet, and a caching server that requests name services from Internet root servers should not contain zone information. This secures authoritative zones against many DNS exploits, such as cache poisoning attacks, which rely on recursive lookups being enabled to exploit the DNS server.

Forwarding

DNS forwarding improves caching performance by building a central cache of answers from which other DNS servers can draw. In a typical forwarding environment, clients are configured with the IP address of internal authoritative DNS servers so they can query them directly. Requests for internal resources are answered directly from zone data. Queries for which the name server is not authoritative are sent to another name server known as a forwarder. Upon receiving the "forwarded" query, the forwarder sends an iterative query on behalf of its client to the Internet root servers. When the forwarder receives an answer, it responds to the name server, which in turn responds to the client. All the while, these answers are cached by both the forwarder and the DNS server that sent the original query.

Where there are a number of authoritative name servers, it is common practice to configure them to forward queries to a smaller number of forwarders. This takes advantage of centralized caches, which improves performance significantly.

When implementing forwarding, it is a good idea to use multiple forwarders. In this way, if the first forwarder fails to respond, the second forwarder in the list can be contacted, and if necessary, the third, and so on. It is also important to vary the order in which forwarders are assigned to distribute the load. In this way, a single caching server is not overloaded with every query.

You can configure forwarding in one of two ways: forward-first and forward-only. With forward-first, if none of the forwarders on its list respond, the queries are then sent directly to the root servers. A server configured for forward-only does not attempt to access the Internet root servers, even if all its forwarders fail to respond. We recommend forward-only as the more secure configuration and suggest deploying multiple forwarders (in high availability pairs where possible) to reduce the occurrences of all forwarders going offline.

[Figure: Forwarding of recursive queries – internal DNS servers forward queries they cannot answer to caching-only DNS servers, which perform recursion on behalf of internal clients.]
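In BIND, the recommended forward-only behaviour comes down to two options on each internal name server; the forwarder addresses are placeholders:

```
options {
    // Central caching layer – list in a different order per site
    // to spread the load across forwarders
    forwarders { 10.0.0.10; 10.0.1.10; };
    forward only;   // never fall back to the Internet root servers
};
```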


Conditional Forwarding

A variation on forwarding is the concept of conditional forwarding. Conditional forwarding allows you to configure a server to forward queries for a certain zone to specific name servers. Conditional forwarding is often used within an internal network to send queries to internal departmental or partner name servers. This configuration can be effective if a partner's DNS server is not available on the Internet and must be accessed using a private link such as a VPN. Conditional forwarding offers an additional benefit: it relieves some of the load from the main forwarders, as they do not have to query the entire namespace to find an answer. Where internal root servers are neither feasible nor required, forwarding provides a simpler alternative.
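In BIND this takes the form of a forward zone; the partner zone name and addresses below are illustrative:

```
// Send queries for a partner's private namespace across the VPN
zone "partner.example" {
    type forward;
    forward only;
    forwarders { 172.16.5.53; 172.16.6.53; };
};
```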

DHCP

DHCP Deployment

DHCP server deployments can be more complicated than DNS. While an organization can maintain a handful of servers to run DNS for its network, the same rules do not apply to DHCP. For many organizations, the number of DHCP servers far exceeds the number of DNS servers required. It is not uncommon for an organization to decentralize DHCP server deployment while centralizing DNS servers.

Depending on a number of factors, you have the option of going with either a centralized or a decentralized DHCP server deployment. A centralized approach places DHCP servers at the head office or regional branch offices, allowing devices at remote offices to obtain their addresses remotely. A decentralized approach places DHCP servers in remote offices and locations, allowing clients to obtain dynamic IP addresses locally. Deciding which approach to take depends on a number of factors, including:

• Size of office or branch – does the size of the branch or office warrant its own DHCP server? If the number of devices at a location is quite small, you may be able to get away with manual allocation. This can change quickly, however, as the number of devices can grow rapidly, especially with the advent of IP phones and wireless devices.

• Available bandwidth – if available bandwidth is an issue, consider deploying DHCP locally. Even though DHCP traffic is considered light, an already overtaxed WAN connection may not be able to sustain the level of traffic needed to provide adequate DHCP service.

• Address availability – if a device cannot get an address, it falls off the network. In situations where a remote location does not have redundancy built into its DHCP solution, failure of the local DHCP server or losing connectivity with the remote server can result in a loss of addresses. Having a backup solution in the event of connectivity loss is an important part of a disaster recovery plan.


DHCP Failover

As with DNS, eliminating single points of failure is more of a necessity than a best practice. Single-server deployments are always more susceptible to outages than solutions that take redundancy into account. If a DHCP server goes down for an extended period of time, dynamically-configured IP devices (which account for most current devices) eventually lose network connectivity. Redundancy in DHCP design is critical.

For maximum reliability, BlueCat recommends running DHCP Failover between two DHCP servers. While you have the option of placing both servers at the same location, for many of the same reasons discussed in the DNS section, we recommend separating the servers geographically, with one server located at the local site and the second located at the main or regional headquarters. This way, should the local server fail, DHCP services are still provided by the remote server.

[Figure: A primary DHCP server replicating lease data to a secondary DHCP server in a failover pair.]

In a DHCP Failover relationship, each server is aware of any leases assigned by its peer server. This cooperating relationship keeps the address database on each server synchronized. As a result, Failover is able to provide service continuity in the event of hardware, software or network failure without the need to manually reconfigure address pools.
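As an illustration using ISC DHCP, a failover pair might be declared as follows on the primary peer; addresses, timers and ranges are placeholders, and the secondary carries a matching declaration without the mclt and split parameters:

```
# dhcpd.conf on the primary failover peer
failover peer "dhcp-failover" {
    primary;
    address 10.0.0.2;            # this server (local site)
    port 647;
    peer address 10.1.0.2;       # remote server (headquarters)
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 1800;                   # maximum client lead time
    split 128;                   # share the pool roughly 50/50
    load balance max seconds 3;
}

subnet 10.2.0.0 netmask 255.255.0.0 {
    pool {
        failover peer "dhcp-failover";
        range 10.2.0.10 10.2.255.250;
    }
}
```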


Summing Up

As organizations become increasingly network-dependent, IP networks continue to grow in size and complexity. The threats to network security are also increasing and evolving. DNS and DHCP core services must be designed to address the challenges of modern, dynamic network infrastructures. While there are always tradeoffs to be made in any network design (particularly when budgetary constraints are factored in), designs that take industry best practices into account allow you to minimize the negative impact of those tradeoffs as much as possible.

The recommendations made in this document are intended to help network designers build secure networks, reduce the risk of service outages that can adversely impact business operations, and enhance network performance. They are based on BlueCat's extensive experience working with a range of clients, including some of the most demanding and secure organizations in the world. BlueCat is well-equipped to provide guidance and expertise to organizations looking to improve security, lower costs and increase IT efficiency. BlueCat's physical and virtual DNS and DHCP appliances are purpose-built to meet the needs of any organization. They also allow organizations to securely manage change and growth with unsurpassed scalability and future-ready support for IPv6 and DNS security extensions (DNSSEC).


At BlueCat, we believe the explosive growth of connected devices requires a more intelligent network to ensure reliable, secure, always-on application access and connectivity. BlueCat IP Address Management (IPAM) solutions provide a smarter way to connect mobile devices, applications, virtual environments and clouds. With unified mobile security, address management, automation and self-service, BlueCat offers a rich source of network intelligence that can be put into action to protect your network, reduce IT costs and ensure reliable service delivery. Enterprises and government agencies worldwide trust BlueCat to manage millions of devices and solve real business and IT challenges – from secure, risk-free BYOD to virtualization and cloud automation. Our innovative solutions and expertise enable organizations to build a network infrastructure that is more scalable, reliable and secure, as well as simplify the transition to next-generation technologies including IPv6, DNSSEC, M2M and SDN.


© 2013 BlueCat Networks. All rights reserved. The BlueCat logo and IPAM Intelligence are trademarks of BlueCat Networks, Inc. All other product and company names are trademarks or registered trademarks of their respective holders. BlueCat assumes no responsibility for any inaccuracies in this document. BlueCat reserves the right to change, modify, transfer or otherwise revise this publication without notice.