
A first look at Azure Monitor Logs for AKS clusters


SAS has released an open-source project on GitHub called SAS Viya Monitoring for Kubernetes, which provides observability tools for monitoring, logging and alerting in SAS Viya 4. I would encourage customers who deploy SAS Viya 4 to also deploy SAS Viya Monitoring for Kubernetes, because it's really good. However, it is not the only option for monitoring, logging and alerting.

 

There are several other open-source and commercial logging solutions that work with Kubernetes (k8s) deployments. And all of the big three cloud hosting providers have their own native monitoring, logging and alerting solutions, which also support k8s.

 

Today we'll take a first look at one of these, which will be relevant for anyone hosting their SAS Viya 4 deployment in Microsoft's Azure Kubernetes Service (AKS). Microsoft Azure's native monitoring, logging and alerting solution is called Azure Monitor. I have recently had a first look at one part of that solution: Azure Monitor Logs.

 


Over two posts, I'd like to share what I've found so far and hopefully lay a foundation for a more in-depth examination in future. To begin with, in this post we'll see how to create a Log Analytics Workspace from the Azure CLI, how to enable monitoring, and how to reverse both those steps too: disable monitoring and delete the Log Analytics Workspace. They do cost money, after all!

 

In my next post, we'll use SAS Studio in a SAS Viya 4 deployment on Azure to create some distinctive log messages, and then use Azure Monitor Logs to explore log messages in general and find those distinctive messages. This should serve as a simple introduction to the tools and a starting point from which to explore log analysis in Azure Monitor further.

 

First, create your Azure AKS cluster

 

To follow the process of setting up Azure Monitor so that we can play with SAS Viya logs in Azure Monitor Logs, you need a running Azure Kubernetes Service (AKS) cluster. You don't have to set up monitoring before you deploy SAS Viya into that cluster, but it is a good idea: the monitoring tools give you visibility of the progress and health of the deployment process, as well as letting you monitor your SAS Viya 4 deployment afterwards. You can, of course, also enable Azure monitoring after deploying SAS Viya 4 in Azure AKS.

 

I set up my Azure Viya deployment by running some handy scripts in an internal SAS Viya 4 Deployment Workshop. For that reason, some of the variable naming conventions I've adopted below follow patterns used in that workshop. But you could deploy Viya 4 on AKS in any valid way, and the following should work just fine.

 

If you don't have a Viya 4 deployment running in AKS, be aware that it may take a little time to stand one up. Running the scripts I used, which intentionally provision small host machines, seems to take about 45 minutes to 1 hour.

 

All the following code examples have a green bar at the side if they are commands you can run (perhaps after modifying them to suit your needs):

 

Green bars are for commands you can run

 

The contents of files, expected output of commands and other things that you are not meant to run as commands are shown in boxes with a blue bar at the side:

 

Blue bars are for files and example command output

 

Finally, this post includes the code of an example Kusto query from the Microsoft Azure Monitor Logs site, and that is shown in a box with a purple bar:

 

Purple bars are for Kusto queries

Don't worry if the colors are not clear; the words also explain what the content in each box is.

 

Create a Log Analytics Workspace from the CLI

 

Note: You should be aware that creating an Azure Log Analytics Workspace incurs a modest cost, in addition to the cost of your Azure AKS cluster.

I will assume that the whole process we are about to follow is going to be run from the Azure Command Line Interface (CLI) in a bash shell. Other options are definitely available - not least PowerShell and the Azure Portal web interface.

 

Begin by opening an Azure CLI shell session, and signing in as a user who has access to the subscription that should be charged for resources you use, i.e. in running the Log Analytics Workspace and in running your Kubernetes cluster.

 

Like pretty much everything in Azure, your new Log Analytics Workspace must be associated with an Azure Subscription, so that Microsoft knows who to charge for its running costs. Before we create the Log Analytics Workspace, let's list the subscriptions you have access to, from the Azure bash shell prompt:

 

az account list -o table

 

If the Azure subscription (let's suppose it is called subscription-name-here) which you intend to have charged for the cost of the Azure Log Analytics Workspace is not currently the default, you can make it the default like this:

 

az account set -s "subscription-name-here"
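
If you want to double-check which subscription is now the default before creating anything, this should show it:

# Show the currently active (default) subscription
az account show -o table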

 

Now we can create a text file called deploylaworkspacetemplate.json containing a JSON-format template for your new Log Analytics Workspace resource. The file's contents should be like this:

 

{
    "$schema": "https://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "workspaceName": {
            "type": "String",
            "metadata": {
                "description": "Specifies the name of the workspace."
            }
        },
        "location": {
            "type": "String",
            "allowedValues": [
                "eastus",
                "westus"
            ],
            "defaultValue": "eastus",
            "metadata": {
                "description": "Specifies the location in which to create the workspace."
            }
        },
        "sku": {
            "type": "String",
            "allowedValues": [
                "Standalone",
                "PerNode",
                "PerGB2018"
            ],
            "defaultValue": "PerGB2018",
            "metadata": {
                "description": "Specifies the service tier of the workspace: Standalone, PerNode, Per-GB"
            }
        }
    },
    "resources": [
        {
            "type": "Microsoft.OperationalInsights/workspaces",
            "name": "[parameters('workspaceName')]",
            "apiVersion": "2015-11-01-preview",
            "location": "[parameters('location')]",
            "properties": {
                "sku": {
                    "Name": "[parameters('sku')]"
                },
                "features": {
                    "searchVersion": 1
                }
            }
        }
    ]
}

 

Next, set up some environment variables to help us create the Log Analytics Workspace with the right properties.

 

The first is something we find useful in GEL (the Global Enablement and Learning team in SAS) for a workshop where multiple students share an Azure subscription. We use the STUDENT variable as a prefix in the names of other resources. This means students can create their own instance of a resource, with a unique name that is easily identifiable as belonging to them and that sorts together with their other resources in an alphabetical listing. At the Azure bash shell prompt:

 

# Get the signed-in user's mail nickname from Azure AD and strip the surrounding quotes
STUDENT=$(az ad signed-in-user show --query mailNickname | sed -e 's/^"//' -e 's/"$//')

 

In real customer deployments, there are unlikely to be lots of workshop students sharing an Azure subscription, but there may still be reason to plan for multiple 'tenant' deployments - different teams in the organization, dev/test/prod environments and so on may all have their own sets of resources. So, while customers probably wouldn't call it 'STUDENT', they may very well want to set up a variable to identify the 'tenant' team, environment etc., and use that variable as one component in otherwise 'standard' resource names.
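
For example (purely illustrative, with a made-up team name; the rest of this post continues to use the STUDENT prefix), a customer site might do something like this:

# Hypothetical alternative for a customer site: prefix resource names with a 'tenant' identifier
export TENANT=devteam
export RESOURCE_GROUP_NAME=${TENANT}viya4aks-rg
export WORKSPACE_NAME=${TENANT}loganalyticsworkspace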

 

The other three variables we set up are a bit more self-explanatory, and use our 'tenant' variable (STUDENT) as a prefix in their names so that multiple instances of the same resource will have unique names:

 

# Set up variables for parameters
export RESOURCE_GROUP_NAME=${STUDENT}viya4aks-rg
export DEPLOYMENT_NAME=${STUDENT}loganalyticsdeployment
export WORKSPACE_NAME=${STUDENT}loganalyticsworkspace
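
It does no harm to echo the variables first, to confirm they contain what you expect:

# Sanity check: confirm the variable values before using them
echo RESOURCE_GROUP_NAME=$RESOURCE_GROUP_NAME
echo DEPLOYMENT_NAME=$DEPLOYMENT_NAME
echo WORKSPACE_NAME=$WORKSPACE_NAME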

Now we are ready to create the Log Analytics Workspace by running this command:

 

# Create the Log Analytics Workspace - this can take a few seconds or a few minutes
az deployment group create --resource-group $RESOURCE_GROUP_NAME --name $DEPLOYMENT_NAME --template-file /path/to/deploylaworkspacetemplate.json --parameters workspaceName=$WORKSPACE_NAME
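
One way to confirm the workspace was created (assuming a reasonably recent Azure CLI that includes the az monitor log-analytics workspace commands) is:

# Show the new Log Analytics Workspace - it should now exist in your resource group
az monitor log-analytics workspace show --resource-group $RESOURCE_GROUP_NAME --workspace-name $WORKSPACE_NAME -o table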

   

Enable Monitoring for your AKS cluster

 

The command to enable the monitoring add-ons for an AKS cluster takes an argument which passes in the ID of the Log Analytics Workspace to which metrics, logs etc. should be sent. Run this command to get that ID:

 

# Look up the full Azure resource ID of the Log Analytics Workspace (uses jq to extract the id field)
LOG_ANALYTICS_WORKSPACE_RESOURCE_ID=`az resource list --name $WORKSPACE_NAME | jq -r '.[] | .id'`
echo LOG_ANALYTICS_WORKSPACE_RESOURCE_ID=$LOG_ANALYTICS_WORKSPACE_RESOURCE_ID

 

Now we can enable monitoring addons for your AKS cluster with name ${STUDENT}viya4aks-aks in the resource group specified in $RESOURCE_GROUP_NAME:

 

az aks enable-addons --addons monitoring -n ${STUDENT}viya4aks-aks -g $RESOURCE_GROUP_NAME --workspace-resource-id $LOG_ANALYTICS_WORKSPACE_RESOURCE_ID
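
To confirm the addon was enabled on the Azure side (the next section checks from inside the cluster), you can list the cluster's addon profiles and look for the monitoring (omsagent) entry:

# Show the addon profiles for the AKS cluster - the monitoring addon should be listed as enabled
az aks show -n ${STUDENT}viya4aks-aks -g $RESOURCE_GROUP_NAME --query addonProfiles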

   

Check the monitoring agent was deployed to your AKS cluster

 

Enabling Azure's monitoring addon deploys a daemonset called omsagent to your AKS cluster (a daemonset is a Kubernetes workload that runs exactly one copy of a pod on each node). Let's check that this daemonset is actually present on each node in your cluster. First, see how many nodes you have at the moment (or skip this step if you already know):

 

# Show active nodes in the Kubernetes cluster at this time
kubectl get nodes

Here's some example output from the GEL deployment workshop AKS cluster I have been using, which has 5 nodes:

 

NAME                                STATUS   ROLES   AGE     VERSION
aks-cas-22084866-vmss000000         Ready    agent   3h26m   v1.18.6
aks-compute-22084866-vmss000000     Ready    agent   3h26m   v1.18.6
aks-stateful-22084866-vmss000000    Ready    agent   3h26m   v1.18.6
aks-stateless-22084866-vmss000000   Ready    agent   3h26m   v1.18.6
aks-system-22084866-vmss000000      Ready    agent   3h28m   v1.18.6

 

Use kubectl to check that the 'omsagent' daemonset has been deployed to each node in your AKS Kubernetes cluster, indicating that monitoring is enabled:

 

# Show how many nodes have the omsagent daemonset deployed
kubectl get daemonset omsagent -n kube-system -o wide

 

Here is the output I get - the omsagent is deployed on all five of my nodes:

 

NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE     CONTAINERS   IMAGES                    SELECTOR
omsagent   5         5         5       5            5           <none>          3m44s   omsagent     mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod08072020   component=oms-agent,tier=node
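
If you want to see the individual omsagent pods and which node each one is running on, the label selector shown in the SELECTOR column above should work:

# List the omsagent pods, one per node, using the daemonset's label selector
kubectl get pods -n kube-system -l component=oms-agent -o wide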

   

Confirm log data is being collected in Azure Monitor Logs

 

To confirm that log data is being sent to the log analytics workspace, open Azure Portal (if it is not already open), navigate to Azure Monitor, and in the pane on the left, choose Logs.

 

Azure Monitor showing the Logs menu item in the left pane

 


 

A new logs query should open. Near the top-left corner of the query, click 'Select scope': 

 

In the new query window, click 'Select scope'

 

 

The Select a scope panel appears. Find and select your AKS cluster (a 'Kubernetes service') in this panel. For example, if your username is sukdws, it should be called sukdwsviya4aks-aks.

 

Tip: When you select something, a warning message may appear near the top of this panel saying "You may only choose items from the same resource type". Do not worry about this: we will only select one resource, a Kubernetes service, so there is no risk of selecting more than one resource type.

Select a scope: choose your AKS cluster, resource type = "Kubernetes service"

 

 

When you have selected your AKS cluster (type: Kubernetes service - it's the same thing), click Apply.

 

Now we'll try one of the example queries to see if we are collecting data. Click the 'Example queries' button, on the right above the New Query window. Select Kubernetes Services from the list on the left, and then scroll down in the panel on the right until you see 'List container logs per namespace':

 

Azure Monitor Logs example query: List container logs per namespace

 

 Click Run in the box for List container logs per namespace. The example Kusto (KQL) code in this query is:

 

// List container logs per namespace
// View container logs from all the namespaces in the cluster.
ContainerLog
|join(KubePodInventory| where TimeGenerated > startofday(ago(1h)))//KubePodInventory Contains namespace information
on ContainerID
|where TimeGenerated > startofday(ago(1h))
| project TimeGenerated ,Namespace , LogEntrySource , LogEntry

 

The results look something like this: 

Results of example query

 

These are clearly logs from our deployment, but it is not yet easy to see important things like:

  • which container or service generated the log message
  • the severity of the message (DEBUG, INFO, WARNING, ERROR etc.) where there is one
  • the text of the log message

 

We will look at the query and how to make a better one that presents SAS Viya log data more clearly in my next post.    
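
In the meantime, if you prefer the command line, the same workspace can also be queried from the Azure CLI. The following is just a sketch, and assumes the log-analytics extension for the Azure CLI is installed (az extension add --name log-analytics):

# Get the workspace's customer ID (a GUID), which the query command expects
WORKSPACE_CUSTOMER_ID=$(az monitor log-analytics workspace show --resource-group $RESOURCE_GROUP_NAME --workspace-name $WORKSPACE_NAME --query customerId -o tsv)

# Run a simple Kusto query from the CLI, e.g. fetch ten recent container log records
az monitor log-analytics query -w $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerLog | take 10" -o table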

Cleanup: Disable Monitoring and Remove the Log Analytics Workspace

 

Azure Monitoring is incurring cost while it collects log data and sends it to your Log Analytics Workspace. To avoid wasting money, you should disable monitoring and delete your Log Analytics Workspace when you have finished with it.

 

This should disable Monitoring Addons for your AKS cluster with name ${STUDENT}viya4aks-aks, assuming you have set variables with the same values as before for STUDENT and RESOURCE_GROUP_NAME. This may take a couple of minutes:

 

az aks disable-addons --addons monitoring -n ${STUDENT}viya4aks-aks -g $RESOURCE_GROUP_NAME
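
Once the addon is disabled, the omsagent daemonset should disappear from the kube-system namespace. A quick way to confirm:

# After disabling the addon, this should (eventually) report that the daemonset is not found
kubectl get daemonset omsagent -n kube-system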

 

That done, again with the same values for the RESOURCE_GROUP_NAME and DEPLOYMENT_NAME variables that you set before, this should delete the Log Analytics Workspace:

 

az deployment group delete --resource-group $RESOURCE_GROUP_NAME --name $DEPLOYMENT_NAME
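
If you find the Log Analytics Workspace is still listed in the resource group after this, it can also be deleted explicitly:

# If necessary, delete the Log Analytics Workspace resource itself
az monitor log-analytics workspace delete --resource-group $RESOURCE_GROUP_NAME --workspace-name $WORKSPACE_NAME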

 

Look out for my next post, where we'll explore Azure Monitor Logs queries in a bit more detail. As always, please do leave a comment below, and click 'Like' if you feel this post has earned it.

 

To be continued, so see you next time!

Comments

Hi Dave

Thanks for the step-by-step guidance to implement Azure Monitor Logs for Viya 2020.

 

Just a heads-up: during implementation I found that the resource group named in RESOURCE_GROUP_NAME needs to exist before it is used to create the Log Analytics Workspace, or we might get the error below.

[Screenshot of the resulting error message]

 

Regards,

Sanket

 
