Application Monitoring – Best Practices


You know that you need to monitor the performance of your systems to ensure that they keep running efficiently. Applications are central to your business’s operations, so they need to be monitored. Automated monitoring reduces the cost of watching the activities of your applications and all of the services that support them.

Automation is not as straightforward as it sounds. For a start, there are a lot of different techniques that can be used to monitor applications, and the main reason for that is that there are many different types of applications.

What is an application?

It seems fatuous to start by explaining what an application is. You know what applications you have – they are the software packages that your users access to do their jobs. However, the word “application” is vague. Remember that protocols are also known as applications, such as the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and Kerberos. So what does an application monitoring system monitor: software packages or network services?

Even the software applications that you install have many layers. Is a Web server an application or is it part of the infrastructure? A database supports the Web pages that users access. The database management system (DBMS) is an application, and so is HTTP, which delivers communications between applications. However, HTTP forms part of the communication system, so maybe that’s part of the infrastructure.

Whether an element of a system is defined as an application or infrastructure, it still needs to be monitored.

Application dependencies

Starting from a user-accessed system, you search for all of its components in the package and document them. You then look at each of those elements, which are mostly applications in themselves, and look at their components. This process results in the creation of a hierarchy of applications, down to services, and then further down to system resources.

An application monitor needs information on the components of an application and how they work together. Automated tools can map that hierarchy for you. Without an application dependency map, it is impossible to correctly monitor the activities of an application and spot when resources are running low.

An application performance problem is more likely to be the fault of one of the systems that support the user-facing application than that application itself.

The application dependency map provides a path to search when a front-end application’s performance becomes impaired. This record cuts out hours of research and enables technicians to pinpoint the real cause of an application’s problems. This task is called root cause analysis, and a good application monitor will do all of that work for you.
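The idea of walking a dependency map during root cause analysis can be sketched in a few lines. The component names, health statuses, and map below are invented for illustration; a real monitor would discover them automatically and feed in live metrics.

```python
# Hypothetical sketch: root cause analysis over an application dependency map.
# All component names and health values here are illustrative assumptions.

dependency_map = {
    "web-frontend": ["api-service", "auth-service"],
    "api-service": ["database", "cache"],
    "auth-service": ["database"],
    "database": [],
    "cache": [],
}

# Simulated health status per component (a real monitor would derive
# this from collected performance metrics).
health = {
    "web-frontend": "degraded",
    "api-service": "degraded",
    "auth-service": "ok",
    "database": "failing",
    "cache": "ok",
}

def root_cause(component, seen=None):
    """Follow unhealthy dependencies downward; return the deepest unhealthy node."""
    if seen is None:
        seen = set()
    seen.add(component)
    for dep in dependency_map.get(component, []):
        if dep not in seen and health.get(dep) != "ok":
            return root_cause(dep, seen)
    return component  # no unhealthy dependency below this point

print(root_cause("web-frontend"))  # the database, not the front end, is at fault
```

The search mirrors the point made above: the degraded front end is only a symptom, and the walk ends at the failing supporting system.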

Application Monitoring Best Practices Keypoint

Use an automated monitoring tool that will independently track and record all supporting systems and draw an application dependency map. This map will underpin your root cause analysis when problems arise.

Platform compatibility

Cloud-based systems are rapidly taking over from in-house servers as the hosts of applications. This situation complicates monitoring tasks because services are often integrated into the platform in a protected manner that prevents tampering but also blocks monitoring efforts.

Many businesses have applications running on cloud platforms while also operating applications on their servers. This is known as a hybrid environment and requires extensive data transfers back and forth between cooperating applications that need to share information. Applications can span a platform with files managed on one system but accessed by tools delivered from a separate SaaS package operated by a different provider.

Cloud platforms include reporting services. These are built into AWS, Google Cloud Platform, and Azure and are free to use. However, siloed reporting won’t give you the benefits that cumulative statistics can bring. You need an application monitor that can gather performance data from many different platforms, correlate it, summarize it, and analyze it.
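The correlation step described above amounts to normalizing each platform’s report format into one schema and then computing cumulative statistics. The report layouts and field names below are invented for the example; each cloud provider’s real reporting format differs.

```python
# Illustrative sketch: merging siloed per-platform reports into one summary.
# The record layouts and field names are assumptions made for this example.
from collections import defaultdict

aws_report = [{"service": "orders-api", "cpu_pct": 62}, {"service": "billing", "cpu_pct": 88}]
azure_report = [{"name": "orders-api", "cpuPercent": 71}]

def normalise(report, name_key, cpu_key, platform):
    """Map one platform's report format onto a common schema."""
    return [
        {"service": r[name_key], "cpu_pct": r[cpu_key], "platform": platform}
        for r in report
    ]

combined = normalise(aws_report, "service", "cpu_pct", "aws") + \
           normalise(azure_report, "name", "cpuPercent", "azure")

# Cumulative statistics across platforms, per service.
totals = defaultdict(list)
for rec in combined:
    totals[rec["service"]].append(rec["cpu_pct"])

summary = {svc: sum(v) / len(v) for svc, v in totals.items()}
print(summary)  # orders-api is averaged across both AWS and Azure
```

Once the records share a schema, the same service running on two platforms contributes to one figure instead of two siloed ones.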

As a minimum, you need an application monitoring system that can access all of the platforms that you currently use. However, getting service with the widest possible capabilities gives you the freedom to choose other platforms to support your users in the future without having to change your monitoring system.

Application Monitoring Best Practices Keypoint

Choose a monitoring tool that can operate in hybrid environments and has a long list of integrations to cover as many platforms as possible.

Delivery technologies

Cloud, on-premises, and virtual systems offer a range of delivery options. Packaging a module together with all of the services that it needs in a container makes delivery easier because the provider doesn’t have to issue extensive installation instructions to the buyer and user of these functions.

Containers offer strong security against tampering because it isn’t possible to control the operations of the carried function by infecting the operating system of the host. However, they also complicate monitoring because each instance is like a mini server and so has to be examined for performance statistics individually. A service that can aggregate activity metrics across multiple containers saves time because it removes the need to manually unify data from multiple sources.
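The aggregation step that saves that manual work is simple in outline: collect each container’s figures, then roll them up per service. The container names and memory figures below are invented; a real tool would pull them from the container runtime’s stats API.

```python
# A minimal sketch of unifying per-container metrics into service-level
# totals. Container names and figures are assumptions for illustration.

container_stats = [
    {"container": "web-1", "service": "web", "mem_mb": 210},
    {"container": "web-2", "service": "web", "mem_mb": 190},
    {"container": "worker-1", "service": "worker", "mem_mb": 340},
]

def aggregate(stats):
    """Roll per-container figures up into one record per service."""
    totals = {}
    for s in stats:
        totals[s["service"]] = totals.get(s["service"], 0) + s["mem_mb"]
    return totals

print(aggregate(container_stats))  # {'web': 400, 'worker': 340}
```

With many short-lived container instances, this roll-up is what turns a crowd of mini servers back into a single monitorable service.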

On-premises hypervisors and the cross-platform equivalent, hyperconverged environments, are other technologies that can block access from the host’s operating system into the application’s functions. Thus, integration with each of these virtual platforms also needs to be built into your chosen application monitoring system.

Another benefit of virtual hosting is that it enables new software to be trialed without the danger of clashes between each package’s required system settings. It is possible to isolate a new service with virtualization to buy time for security assessments before integrating the new product into your system. VMs and containers also allow different versions of the same software, or rival systems, to be tested and evaluated in parallel for performance assessment.

Containers are useful for testing in development environments and for onboarding in operations scenarios, and application monitoring plays an important role in both situations.

Application Monitoring Best Practices Keypoint

Look for application monitoring tools that include the capability to query containers and virtualizations.

Serverless systems

Cloud hosting has many formats and can be provided without a full allocation of virtual server space. Serverless systems offer efficient hosting options because the user only gets charged for the space that is used and doesn’t require CPU capacity to be reserved and paid for even when it is not in use.

Modules that are supported by serverless hosting are known as microservices and they can be delivered to developers through frameworks, libraries, APIs, or developer toolkits. The proliferation of these prewritten plug-in functions speeds up and standardizes the development of Web applications but they complicate monitoring. Without the presence of a server package, there is no adjacent space to run associated statistics extraction programs.

The difficulty of monitoring microservices was resolved by the creation of telemetry standards. These are messaging protocols that developers of Web applications can use to produce activity reporting log messages. By following these standards, application monitoring packages can provide the same level of activity tracking and performance monitoring as traditional application monitoring software packages.

The technique of monitoring microservices is called distributed tracing and there are several standards for messaging within this framework. The three main options are called OpenTracing, OpenCensus, and OpenTelemetry. The third of these was created by a merger of the first two. However, there are still many systems in circulation that follow either the OpenTracing or OpenCensus standards. Fortunately, OpenTelemetry is backward compatible, so if you have an application monitor that supports that protocol, you also have the other two covered.
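The mechanism that lets these standards stitch services together is trace context propagation: each outgoing request carries a W3C `traceparent` header, so spans emitted by different services share one trace ID. The hand-rolled sketch below shows only that mechanic; real deployments would use the OpenTelemetry SDK rather than building headers by hand.

```python
# A minimal sketch of W3C Trace Context propagation, the plumbing that
# underlies distributed tracing standards such as OpenTelemetry.
# Format: 00-<32 hex trace ID>-<16 hex span ID>-<2 hex flags>
import secrets

def new_traceparent():
    """Start a trace: version 00, random trace and span IDs, sampled flag set."""
    trace_id = secrets.token_hex(16)   # 32 hex characters
    span_id = secrets.token_hex(8)     # 16 hex characters
    return f"00-{trace_id}-{span_id}-01"

def child_traceparent(parent):
    """A downstream service keeps the trace ID but mints its own span ID."""
    version, trace_id, _parent_span, flags = parent.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"

front_end = new_traceparent()
microservice = child_traceparent(front_end)

# Both spans carry the same trace ID, so a monitor can stitch them
# into one end-to-end trace across services.
assert front_end.split("-")[1] == microservice.split("-")[1]
```

Because every hop keeps the trace ID while adding its own span ID, an application monitor can reassemble the full request path even when the services run on serverless platforms it cannot log into.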

The location of microservices is obscured by their method of delivery. A developer can access an entire package by inserting just a line in the program. The functions that are called by that line need to be identified and monitored as well as the calling program. Application discovery supported by distributed tracing will provide a full dependency map that not only identifies the microservices behind APIs and platforms but the layers of serverless-hosted functions that compose those microservices.

Application Monitoring Best Practices Keypoint

Distributed tracing is becoming increasingly important for application monitoring, particularly for Web applications. Distributed tracing also gives you the capability to monitor the activities of mobile apps, which are usually backed by microservices.

Setting up application monitoring

Even application monitoring tools that include autodiscovery need a starting point, which includes granting access to the platform that hosts the user-facing application to be explored.

The automation in an application monitoring tool lies in a system of alerts that are triggered by performance thresholds. While most application monitors are shipped with pre-set alert rules, more can be added and existing threshold levels can be adjusted. Examples of events that could trip an alert include CPU capacity or memory utilization crossing a limit.

Rather than waiting for a resource to be fully used up, a threshold would be set at a warning level, such as 75 percent, which gives notified technicians time to take action before the resource is completely exhausted.

It is also possible to create composite thresholds that will only trigger an alert if all conditions of the rule are met.
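A threshold rule and a composite rule can be sketched together. The metric names are invented for the example; the 75 percent warning level mirrors the figure mentioned above.

```python
# Hypothetical sketch of alert rules: a simple warning threshold and a
# composite rule that fires only when ALL of its conditions are met.
# Metric names and levels are assumptions made for illustration.

def composite_alert(metrics, rules):
    """Return True only if every condition in the rule set is breached."""
    return all(metrics[name] >= threshold for name, threshold in rules.items())

# Warn at 75 percent, well before the resource is exhausted.
rule = {"cpu_pct": 75, "memory_pct": 75}

print(composite_alert({"cpu_pct": 80, "memory_pct": 60}, rule))  # False: memory is fine
print(composite_alert({"cpu_pct": 80, "memory_pct": 78}, rule))  # True: both breached
```

The composite form cuts alert noise: a CPU spike alone stays quiet, while CPU and memory pressure together signals a genuine capacity problem worth a technician’s time.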

Alerts are shown in the system console. However, unattended automation can only be safely implemented if there is a mechanism to send those alerts outside the system to messaging services that technicians routinely access, such as email, SMS, a collaboration system, or a service desk ticket routing service.

Application Monitoring Best Practices Keypoint

Alert thresholds and notification routines need to be set up in the application monitoring system console.

Automated responses

Some application monitors include automated responses. These need to be set up before they are active. Responses are written as playbooks in the form of an alert condition and an associated action.

The implementation of automatic actions should be conducted with caution, as binary decisions lack subtlety or judgment and can render a system unusable. For example, a threshold for user access capacity could be calibrated at too low a level, automatically logging all users off the system. Such a tool would be counterproductive to the aim of installing the application monitor, which is to improve efficiency.
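One common safeguard is to mark each playbook entry as safe or unsafe to run unattended, so that disruptive actions fall back to a human decision. Everything below is invented for illustration: the action names, the thresholds, and the safe/unsafe split are assumptions, not any product’s actual playbook format.

```python
# Illustrative playbook sketch: alert conditions mapped to actions, with
# a guard against the blunt "log everyone off" failure mode described
# above. All names and thresholds are hypothetical.

def restart_cache():
    return "cache restarted"

def page_technician():
    return "technician paged"

playbook = [
    # (condition, action, safe_to_automate)
    (lambda m: m["cache_hit_pct"] < 50, restart_cache, True),
    (lambda m: m["active_users"] > 950, page_technician, False),  # needs a human
]

def respond(metrics):
    actions = []
    for condition, action, safe in playbook:
        if condition(metrics):
            # Run the action unattended only if it is marked safe;
            # otherwise raise an alert for a technician to judge.
            actions.append(action() if safe else "alert raised for review")
    return actions

print(respond({"cache_hit_pct": 40, "active_users": 990}))
# ['cache restarted', 'alert raised for review']
```

The low-risk cache restart runs automatically, while the user-capacity condition only raises an alert, keeping the blunt automated logoff scenario out of reach.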

Application Monitoring Best Practices Keypoint

Response automation can be set up as actions associated with triggers but should be used with caution.

Automated Application Monitors

The central rule in the best practices of application monitoring is to get a competent tool to perform the task for you.

What should you look for in an application monitor? 

We reviewed the market for application monitoring systems and analyzed tools based on the following criteria:

  • Automated discovery
  • Application dependency mapping
  • Hybrid system monitoring
  • Distributed tracing
  • Alerts
  • A free trial or a demo that provides a no-cost assessment period
  • Value for money from a comprehensive monitoring tool that is offered at a fair price

With these selection criteria in mind, we identified flexible application monitoring services that can interface to a range of platforms.

Here is our list of the three best application monitoring systems:

  1. AppOptics EDITOR’S CHOICE Infrastructure and application monitoring from a SaaS package that provides dependency mapping, distributed tracing, and root cause analysis. Get a 30-day free trial.
  2. Datadog APM A SaaS APM with application dependency mapping, root cause analysis, and distributed tracing.
  3. Dynatrace Full-stack Monitoring A cloud-based service that uses AI in its infrastructure and application monitoring.

You can read more about each of these options in the following sections.

1. AppOptics (EDITOR’S CHOICE)


AppOptics provides Infrastructure Monitoring for hybrid environments and Application Monitoring with autodiscovery, application dependency mapping, and distributed tracing.

Key Features:

  • Hybrid environments
  • Autodiscovery
  • Infrastructure and APM
  • Distributed tracing
  • Code profiling

The AppOptics system is a SaaS package with a cloud-hosted console that is accessed through any standard Web browser. The monitor works with customizable thresholds that trigger notifications if performance problems are detected.

The package includes a code profiler that automatically detects the language or framework of each application code segment. It operates on systems written for the Java Virtual Machine (JVM), .NET, and WCF, and traces through PHP, Python, Ruby, and Node.js code.

Pros:

  • A SaaS platform with a Web-based console
  • Customizable alerts
  • Application dependency mapping and root cause analysis
  • Code profiling for error tracking
  • Suitable for development and operations

Cons:

  • No network monitoring

EDITOR’S CHOICE

AppOptics is our top pick for an application monitor because it offers monitoring from user-facing applications down to infrastructure, creating an application dependency map. The AppOptics system automatically discovers and maps application dependencies, covering on-premises, cloud, and virtual systems and it includes a code profiler.

Official Site: https://my.appoptics.com/sign_up

OS: Cloud-based

2. Datadog APM


Datadog APM is part of a cloud platform that offers many system monitoring and management tools. The APM is centered on a distributed tracing service and is also available together with a code profiler.

Key Features:

  • Application discovery
  • Dependency mapping
  • Distributed tracing
  • Code profiling option

The Datadog APM includes a deployment tracker that can identify deployment packages with differing settings, allowing different versions to be run simultaneously and compared.

Code profiling is offered in a higher plan that still includes the standard APM features. The profiler scans through code written in Java, .NET, PHP, Node.js, Ruby, Python, Go, or C++ as it runs.

Pros:

  • Related modules available
  • Optional code profiler that traces through plain-text programs
  • Deployment tracker for testing and shipping
  • Alerts with notification and routing options

Cons:

  • You need to subscribe to many modules to get full-stack observability

Other modules from Datadog work in concert with the APM and extend its capabilities. These include an Infrastructure Monitoring package and Network Monitoring options. You can get a 14-day free trial of Datadog that gives access to all modules.

3. Dynatrace Full-stack Monitoring


Dynatrace offers a SaaS platform with a range of packages that include the Full-stack Monitoring option. This service includes the ability to monitor all layers of the application support stack from user-facing systems, through infrastructure, down to server resources.

Key Features:

  • Application dependency mapping
  • Root cause analysis
  • AI-based predictive alerts

Dynatrace implements application dependency mapping that uses AI to identify contributing elements and lay down paths for root cause analysis. AI is also used for predictive alerts that can calculate when all processes running on the same server are going to run down resources or cause locking. The service also provides distributed tracing and code profiling.

Pros:

  • Offers error identification and bug tracking for development
  • Scans log files as well as generates system warnings through distributed tracing
  • AI processes for root cause analysis

Cons:

  • Log management is an extra module

