
Introduction to Cisco WAAS (Wide Area Application Services)

Cisco WAAS (Wide Area Application Services) is a solution designed to bridge the gap between application performance and infrastructure consolidation in a WAN environment. Cisco WAAS employs robust optimizations at multiple layers to ensure high-performance access to remote application infrastructure, including file services, email, intranets, portal applications, and data protection. By mitigating the factors that limit WAN performance, Cisco WAAS not only improves application performance but also puts IT organizations in a stronger position to consolidate distributed infrastructure, control costs, and meet data protection and compliance requirements.

IT organizations face two conflicting challenges: delivering high application performance to an increasingly distributed workforce, and consolidating costly infrastructure to streamline management, improve data protection, and keep costs down. Wide-area networks (WANs) separate the growing population of remote workers from the locations where IT deploys its infrastructure. The result is significant latency, packet loss, congestion, and bandwidth limitation, all of which can impair application performance.

The purpose of this book is to elaborate on the Cisco WAAS solution, including a thorough examination of how it is designed and deployed. This chapter provides an overview of the performance barriers presented by the WAN and a technical overview of Cisco WAAS. It also describes the Cisco WAAS software architecture and outlines how each of the fundamental optimization components overcomes these application performance barriers. The chapter ends with an explanation of how Cisco WAAS fits into a network-based architecture of optimization technologies, and how those technologies can be deployed in combination with Cisco WAAS to improve application performance over the WAN.

Understanding Application Performance Barriers

Before investigating how Cisco WAAS addresses the performance challenges caused by WAN network conditions, it is important to understand how those conditions affect the performance of your applications. Today’s applications are more robust and complex than the applications of 10 years ago, and this trend is expected to continue. Many enterprise applications are tiered: a presentation layer (usually composed of web services) accesses an application layer of servers, which in turn interacts with a database layer, an arrangement commonly referred to as an n-tier architecture.

Each of these layers typically interacts through middleware, a subsystem that connects disparate software components or architectures. Many applications in use today are still simple client/server, involving only a single tier on the server side (such as a basic file server), but multi-tier application infrastructures are becoming increasingly popular.

Layer 4 Through Layer 7

Server application instances, whether single-tier or n-tier, interact with user application instances primarily at the application layer of the Open Systems Interconnection (OSI) model. At this layer, control and data messages are exchanged and functions are performed based on the business process or transaction being executed. For example, a user can issue an HTTP GET to retrieve an object stored on a web server.
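To make this concrete, the HTTP GET mentioned above is simply an application layer control message. The following sketch shows the bytes a client would hand to the transport layer for such a request; the host name and path are hypothetical:

```python
# A minimal sketch of the application-layer control message a client
# sends when it issues an HTTP "GET" for an object on a web server.
# The host name and path are hypothetical.
request = (
    "GET /index.html HTTP/1.1\r\n"   # method, object, protocol version
    "Host: www.example.com\r\n"      # which server application instance
    "Connection: close\r\n"
    "\r\n"                           # blank line ends the control message
)
print(request.encode("ascii"))       # bytes handed down to the transport layer
```

The server's reply is likewise an application layer message: a status line, headers, and then the requested object as a data message.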

Interactions at this layer are complex, because the number of operations that can be performed by proprietary, or even standards-based, protocols can literally reach into the hundreds or thousands. The hierarchical relationship between the server application instance and the user application instance across the application layers of a given node pair adds further complexity and performance limitations.

For example, data transferred between application instances can pass through a shared (and pre-negotiated) presentation layer. This layer may or may not be present, depending on the application, because many applications have built-in data-representation semantics. The presentation layer serves to ensure that data conforms to a particular structure, such as ASCII or Extensible Markup Language (XML).
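The presentation layer's role can be shown in miniature: the same application data rendered into an agreed-upon structure, here XML built with Python's standard library. The record fields are invented purely for illustration:

```python
# A miniature view of the presentation layer's job: render application
# data into a pre-negotiated structure (here XML). The field names and
# values are illustrative, not from any real protocol.
import xml.etree.ElementTree as ET

record = {"user": "alice", "action": "open", "file": "report.doc"}
root = ET.Element("request")
for key, value in record.items():
    ET.SubElement(root, key).text = value  # one element per field

wire_form = ET.tostring(root, encoding="unicode")  # structured form sent on
print(wire_form)
```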

Data can then be passed from the presentation layer to the session layer, which is responsible for establishing an overlay session between the two endpoints. Session layer protocols give applications the ability to manage checkpoints and recovery for atomic upper-layer protocol (ULP) exchanges; this happens at the transactional or procedural level, as opposed to the transfer of raw segments provided by the Transmission Control Protocol (discussed later).

Like the presentation layer, the session layer may be absent: many applications have built-in session-management semantics and do not use a separate session layer. Some applications, however, typically those that use remote procedure calls (RPC), do require one.

Whether or not the data exchanged between the user application instance and the server application instance passes through a presentation or session layer, data sent over the network is handled by a transport protocol. The transport protocol is primarily responsible for multiplexing: ensuring that data sent by one node is delivered to the appropriate application process on the receiving node.

Commonly used transport layer protocols include the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Stream Control Transmission Protocol (SCTP). Transport protocols generally provide guaranteed delivery and are responsible for adapting to changing network conditions, such as bandwidth changes or congestion.

Some transport protocols, such as UDP, do not provide these functions. Applications that use UDP either implement their own means of guaranteed delivery and congestion control, or simply do not require those features.
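The distinction is visible directly at the socket API. The sketch below creates both socket types and uses a loopback UDP exchange to show port-based multiplexing, the transport layer's job of steering data to the right application process; all addresses are local-only and illustrative:

```python
# Creating both transport types and demonstrating port-based
# multiplexing over loopback. Addresses/ports here are local-only.
import socket

# TCP: connection-oriented; guaranteed delivery and congestion control
# are provided by the transport layer itself.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless datagrams; reliability, if needed, is left to
# the application.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.bind(("127.0.0.1", 0))       # kernel assigns a free port
port = udp_sock.getsockname()[1]      # this port number is the
                                      # multiplexing key for the process
udp_sock.sendto(b"hello", ("127.0.0.1", port))
data, addr = udp_sock.recvfrom(1024)  # datagram arrives at that port
print(data)

tcp_sock.close()
udp_sock.close()
```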

The aforementioned components, including transport, session, presentation, and application layers, represent a group of services that determine how application data is exchanged between different nodes. These components are commonly referred to as Layer 4 to Layer 7 Services, L4-7 Services, or Application Network Services (ANS).

The L4-7 services rely on the packet routing and forwarding services provided by the lower layers (network, data link, and physical) to move segments of application data, carried in network packets, between communicating nodes. Aside from network delay caused by distance and the speed of light, L4-7 services typically add the largest amount of operational delay to application performance.

This is due to the large amount of processing required: buffering data in and out (transport layer), maintaining long-lived sessions between nodes (session layer), ensuring that data meets presentation requirements (presentation layer), and exchanging control and application data messages based on the task being executed (application layer).
Figure 1-1 shows an example of how L4-7 presents application performance challenges.

Figure 1-1

L4-7 Performance Challenges
Performance challenges caused by L4-7 can be broadly categorized into latency, bandwidth inefficiency, and throughput. These are discussed in the next three sections.

Latency

L4-7 latency is the sum of the latency components added by each of four layers: application, presentation, session, and transport. Because the presentation, session, and transport layer delays are usually small and have minimal impact on overall performance, this section focuses on delay introduced at the application layer. Note in particular that although the delay added by a node’s own L4-7 processing matters, it is usually minimal compared to the delay of the network itself, and far smaller than the application layer delay introduced by protocols that are chatty over high-latency networks.

Application layer delay is defined as application protocol operational delay, and it typically occurs when an application or protocol exhibits “send-and-wait” behavior. An example of application layer latency can be observed when accessing files on a file server using the Common Internet File System (CIFS), a protocol popular in environments with Windows clients and Windows servers, or with network-attached storage (NAS) devices accessed by Windows clients. In such cases, the client and server must exchange a series of “administrative” messages before any data is sent to the user.

For example, the client must first establish a session with the server; to do so, it must validate the user’s identity against an authority such as a domain controller. Next, the client must connect to a specific share (or named pipe), which again requires verification of the client’s credentials. Once the user is authenticated and authorized, a series of messages is exchanged to traverse the directory structure and gather metadata.

After the file is identified, a series of lock requests must be sent in sequence (depending on the file type). Only then can file I/O requests (read, write, seek, and so on) be exchanged between the user and the server. Each of these messages exchanges only a small amount of data across the network; this is often unnoticeable in a local-area network (LAN) environment, but it introduces significant delay when operating over a WAN.
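The chain of exchanges just described can be tallied in a rough model. The step names, message counts, and the 200 ms round-trip time below are illustrative assumptions, not an exact CIFS trace:

```python
# Rough tally of the send-and-wait exchanges needed to open one file
# over a chatty protocol such as CIFS. Step names, counts, and the RTT
# are illustrative assumptions, not a real protocol capture.
RTT_S = 0.200  # assumed 200 ms WAN round-trip time

steps = [
    ("session setup / authentication", 2),
    ("tree connect to share",          1),
    ("directory traversal + metadata", 4),
    ("lock requests",                  2),
    ("first read of file data",        1),
]

round_trips = sum(count for _, count in steps)
wait_ms = round_trips * RTT_S * 1000
print(f"{round_trips} round trips -> {wait_ms:.0f} ms "
      "before the user sees file data")
```

Over a LAN with sub-millisecond round trips the same exchange is imperceptible; over the WAN, every one of those round trips is paid in full.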

Figure 1-2 shows an example in which application layer latency in a WAN environment can, by itself, have a significant impact on response time and overall user-perceived performance. In this example, the one-way delay is 100 ms, so only 3 KB of data is exchanged in 600 ms.
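The arithmetic behind this example is straightforward. The sketch below models it with three send-and-wait exchanges of 1 KB each, an assumption chosen to match the figure's totals:

```python
# Back-of-the-envelope model of the Figure 1-2 scenario: a chatty
# exchange over a WAN with 100 ms one-way delay. The 1 KB-per-exchange
# message size is an assumption chosen to match the figure's totals.
one_way_delay_s = 0.100        # 100 ms in each direction
round_trips = 3                # three send-and-wait exchanges
kb_per_exchange = 1            # ~1 KB moved per round trip

elapsed_s = round_trips * 2 * one_way_delay_s   # total wall-clock time
data_kb = round_trips * kb_per_exchange         # total data moved
throughput_kbps = data_kb * 8 / elapsed_s       # effective rate

print(f"{elapsed_s * 1000:.0f} ms to move {data_kb} KB "
      f"(about {throughput_kbps:.0f} kbps effective)")
```

Note that the effective rate here is a function of latency alone; no amount of extra bandwidth would speed this exchange up.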

Note that while the presentation, session, and transport layers add delay, it is usually negligible compared to the application layer delay. Note also that the performance of the transport layer itself is affected by network delay, due to slowdowns associated with exhaustion of the transmission window and other factors. The impact of network delay on application performance is described in the next section, “Network Infrastructure.”

Figure 1-2

Latency-Sensitive Application Example

Bandwidth Inefficiencies

Application performance barriers are created not only by a lack of available network bandwidth (discussed in the “Network Infrastructure” section) but also by application layer inefficiency in how data is transferred. This performance barrier appears when an application’s method of exchanging information between two communicating nodes is inefficient.

Suppose 10 users in a remote office are connected to the corporate campus network via a T1 (1.544 Mbps). If these users use an email server (such as Microsoft Exchange) on the corporate campus network and an email message with a 1 MB attachment is sent to each of them, the message must be transferred once per user, or 10 times in total. Such scenarios can significantly overload the enterprise WAN, and similar patterns can be found in many applications:

  • Redundant email attachments are downloaded multiple times by multiple users over the WAN
  • Multiple copies of the same file are stored on a remote file server and accessed by multiple users over the WAN
  • Multiple copies of the same web object are stored on a remote intranet portal or application server and accessed by multiple users over the WAN
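The cost of the email example above can be sketched with simple serialization arithmetic (protocol overhead, latency, and competing traffic are ignored here):

```python
# Serialization time for the email-attachment example: ten 1 MB copies
# of the same attachment over a T1, versus sending the payload once.
# Protocol overhead, latency, and competing traffic are ignored.
t1_bps = 1.544e6                     # T1 line rate in bits per second
attachment_bits = 1 * 1024 * 1024 * 8  # one 1 MB attachment, in bits

one_copy_s = attachment_bits / t1_bps
ten_copies_s = 10 * one_copy_s       # one transfer per recipient

print(f"one copy: {one_copy_s:.1f} s, ten copies: {ten_copies_s:.1f} s")
```

Nine-tenths of that T1 time is spent re-sending bytes the branch office has already received, which is exactly the redundancy that WAN optimization targets.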

In many cases, the data contained in objects accessed through the various applications used by remote office users contains a significant amount of redundancy. For example, one user may send a file as an email attachment across the corporate WAN while another user accesses the same file (or a different version of it) from a file server over that same WAN. Historically, the packet network has been independent of the application network; that is, the properties of the data were generally not considered, examined, or exploited as the information traveled through the enterprise network.
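The idea of exploiting that redundancy can be sketched with chunk fingerprinting: split a byte stream into chunks, fingerprint each one, and let repeated chunks cross the WAN only once. Fixed-size chunks and SHA-256 are simplifications for illustration; this is not the actual Cisco WAAS algorithm:

```python
# A simplified sketch of redundancy elimination: chunk the stream,
# fingerprint each chunk, and replace repeats with 32-byte references.
# Fixed-size chunking and SHA-256 are illustrative simplifications,
# not the actual WAAS data-reduction scheme.
import hashlib
import os

CHUNK = 256  # bytes per chunk (an illustrative value)

def bytes_on_wire(stream: bytes) -> int:
    """Count bytes that would cross the WAN if repeated chunks were
    replaced by fingerprints after their first occurrence."""
    seen = set()
    sent = 0
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        fp = hashlib.sha256(chunk).digest()  # 32-byte fingerprint
        if fp in seen:
            sent += len(fp)      # repeat: send only the reference
        else:
            seen.add(fp)
            sent += len(chunk)   # first copy crosses in full
    return sent

# Ten users each receiving the same 1 MB attachment: after the first
# transfer, the other nine copies are almost entirely references.
attachment = os.urandom(1024 * 1024)
total = bytes_on_wire(attachment * 10)
print(f"{total} bytes on the wire instead of {10 * len(attachment)}")
```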
