
Export log messages from a command line with getlogs.py and SAS Viya Monitoring for Kubernetes


SAS Viya Monitoring for Kubernetes' log monitoring stack collects log messages and stores them in its own instance of OpenSearch. As of the latest release, version 1.2.21, its command-line tool getlogs.py has been promoted to 'production' status.

 

getlogs.py is a tool for exporting log messages captured in the log monitoring OpenSearch instance to a file or to the console/stdout, formatted as either CSV or JSON. You can use it interactively from a command line, or call it from a script. Filter criteria and/or a search string can be specified as command parameters, or you can download, edit, and resubmit log queries written in OpenSearch's built-in domain-specific language (DSL) query syntax, to get the specific log messages you are interested in.

 

In this post, I'll describe my initial experiences setting up and using getlogs.py. The documentation already includes some nice usage examples - I'll add a few of my own too. In a follow-up post, I'll explore downloading, modifying and re-uploading a query with getlogs.py, to see how easy it might be to exploit the richness of the OpenSearch built-in query language from a script.

 

 

Prerequisites for getlogs.py

 

The documentation for getlogs.py in SAS Help Center lists two requirements: Python 3.11 or later, and the opensearch-py OpenSearch module for Python (a quick way to verify both is shown after the list below). You'll also need a running instance of the log monitoring stack from SAS Viya Monitoring for Kubernetes version 1.2.21 or later, and a way to access it over the network. This can be either:

  • An ingress for the OpenSearch back-end data store (note this is separate from the OpenSearch Dashboards ingress you might use to access the front-end web application), or
  • A kube config file, giving access to the Kubernetes cluster and the logging namespace where OpenSearch is running, which getlogs.py can use to set up temporary port-forwarding to give it a network path to the OpenSearch back-end data store
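
Before going further, you can quickly verify the first two requirements from a shell. A minimal check, assuming python3.11 is on your PATH:

# Confirm the Python version (3.11 or later is required)
python3.11 --version

# Confirm the opensearch-py module is importable by that interpreter
python3.11 -c "import opensearchpy; print('opensearch-py is installed')"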


Python 3.11

 

If you have a relatively recent version of Linux to deploy the tools on, the Python 3.11 prerequisite and its dependencies are unlikely to be an issue. A quick web search will turn up plenty of guides to installing Python 3.11 if yours is a slightly older version.

 

During my initial attempts, the Python 3.11 requirement took more effort to satisfy, because the version of Linux we use for our workshop environments is a little (ahem) 'mature', and didn't have Python 3.11 available in its usual package manager. Python 3.11's dependency on OpenSSL 1.1.1 was tricky to meet without impacting other components running on the same Linux host as my workshop Kubernetes cluster and SAS Viya: upgrading OpenSSL to a major release later than everything else in that Linux release expected broke other services. So I used a separate Linux host to experiment with getlogs.py: an instance of Ubuntu Linux running in WSL (Windows Subsystem for Linux) on my Windows desktop. Since the Ubuntu release running in my WSL was more recent, it already had a suitable version of OpenSSL, which meant deploying Python 3.11 was simple (and didn't break anything else). I had no problem getting the opensearch-py Python module installed in either scenario.
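
For reference, here is roughly what the setup looked like on my Ubuntu WSL instance. A sketch, assuming a recent Ubuntu release where Python 3.11 is available from the standard package repositories (older releases may need a third-party repository):

# Install Python 3.11 plus its venv/ensurepip tooling (package names may vary by release)
sudo apt-get update
sudo apt-get install -y python3.11 python3.11-venv

# Make sure pip is available for that interpreter, then install opensearch-py
python3.11 -m ensurepip --upgrade
python3.11 -m pip install opensearch-py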

 

Network path to OpenSearch from the host where you run getlogs.py

 

getlogs.py offers two ways to define a network path from your client machine to the back-end OpenSearch data store.

OpenSearch ingress

 

SAS Viya Monitoring for Kubernetes ships with alternative sample user-values-opensearch.yaml files illustrating how to set up an ingress for OpenSearch.

 

 

If the user-values-opensearch.yaml file in your USER_DIR directory structure has extra attributes like those illustrated in your chosen sample, Kubernetes will set up the corresponding type of ingress for OpenSearch when you deploy the log monitoring stack.

 

I noted that I could only get an ingress to OpenSearch working properly if I enabled TLS (HTTPS) for it; I got an HTTP 503 error when I tried without TLS/HTTPS. Internally, OpenSearch's network access is always over TLS/HTTPS anyway, and it was no problem for me to configure the ingress with HTTPS, so I just did that. Maybe there is a way to get it working without TLS, but I didn't try particularly hard.
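
Once the ingress is up, a quick way to confirm it is reachable over HTTPS is to request OpenSearch's root endpoint with curl, which returns a short block of cluster information as JSON. A sketch, reusing the example hostname and credentials that appear below:

# Quick reachability check - returns cluster information as JSON if all is well
# (-k skips certificate verification; drop it once your certificate is trusted)
curl -k -u admin:mysecretadminpassword \
    https://opensearch.hostname.something-or-other.com/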

 

Prior to testing that getlogs.py worked, for convenience, I exported four environment variables - OSUSER, OSPASSWD, OSHOST and OSPORT - as illustrated in the documentation under Connect to OpenSearch with Direct Link to Host, so I didn't have to keep telling getlogs.py how to get to OpenSearch with command-line parameters, which are the alternative:

# Try getlogs.py
cd ~/viya4-monitoring-kubernetes/

export OSUSER=admin
export OSPASSWD=mysecretadminpassword
export OSHOST=opensearch.hostname.something-or-other.com
export OSPORT=443

python3.11 logging/bin/getlogs.py

If you don't create your own signed certificate for the TLS connection and provide it in the secret named in that yaml file, I believe OpenSearch creates its own self-signed certificate. Your browser or command-line client might not trust a self-signed certificate, so I prefer to provide one I know is trusted, and avoid that issue. I didn't experiment with that very much, though.
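
If you want to see exactly which certificate the ingress is presenting (and whether it is self-signed), openssl can show you. Again using the example hostname from above:

# Show the subject, issuer and validity dates of the certificate the ingress presents
openssl s_client -connect opensearch.hostname.something-or-other.com:443 \
    -servername opensearch.hostname.something-or-other.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates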

Port forwarding

 

The other option - attractive if you prefer not to (or cannot) create an ingress to your log monitoring OpenSearch instance, or if you prefer not to pass a username and password to getlogs.py on the command line - is to provide a kube config file that gives access to the logging namespace in your Kubernetes cluster where that OpenSearch instance runs. Save the kube config file somewhere on a file system accessible to the host where you will run getlogs.py, set a KUBECONFIG environment variable to point to it, and run getlogs.py with the -pf parameter as shown below:

# Try getlogs.py
cd ~/viya4-monitoring-kubernetes/

export KUBECONFIG=~/.kube/config

python3.11 logging/bin/getlogs.py -pf

This tells getlogs.py to set up a temporary Kubernetes port-forward to use while it queries the OpenSearch data store. In some situations this may be considered more secure - I like that we have a choice.
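
If you are curious what that temporary port-forward looks like, you can set up the equivalent by hand with kubectl. A sketch - the service name below is illustrative, so list the services in your logging namespace first and substitute the OpenSearch REST service you find there:

# Find the OpenSearch REST service in the logging namespace
kubectl -n logging get svc

# Forward local port 9200 to it (substitute the service name from the output above)
kubectl -n logging port-forward svc/opensearch-cluster-master 9200:9200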

 

Default output

 

Whichever method you use to set up a network path to OpenSearch, with no other parameters on the command line, getlogs.py should return the 10 most recently captured log messages, from any logsource/container/pod/namespace, within the past hour, in CSV format. They should look something like the example below - the content will vary depending on what log messages your SAS Viya Monitoring for Kubernetes log monitoring stack most recently happened to capture and stream through to OpenSearch:

Searching index:
Search complete
@timestamp,level,kube.pod,message,_id
2024-02-01T12:58:33.928Z,NONE,canal-qpqww,"2024-02-01 12:58:33.928 [INFO][25162] monitor-addresses/startup.go 432: Early log level set to info
",67065E0E-9E9D-8806-E822-6158E3EACB78


2024-02-01T12:58:33.928Z,NONE,canal-qpqww,"2024-02-01 12:58:33.928 [INFO][25162] monitor-addresses/startup.go 315: Skipped monitoring node IP changes when CALICO_NETWORKING_BACKEND=none
",6C44C501-E7C8-F6ED-1617-FF5CE3E0BA65


2024-02-01T12:58:33.923Z,WARNING,canal-qpqww,Setting GA feature gate ServiceInternalTrafficPolicy=true. It will be removed in a future release.,02BD6FBB-6453-CB17-3658-E5A2AF060D92


2024-02-01T12:58:33.870Z,NONE,canal-wkthm,"2024-02-01 12:58:33.870 [INFO][27083] monitor-addresses/startup.go 432: Early log level set to info
",C32FA169-A498-3410-850B-478F16F9B962


2024-02-01T12:58:33.870Z,NONE,canal-wkthm,"2024-02-01 12:58:33.870 [INFO][27083] monitor-addresses/startup.go 315: Skipped monitoring node IP changes when CALICO_NETWORKING_BACKEND=none
",D3BED951-D4B0-911C-0F8E-B19E62C17EFF


2024-02-01T12:58:33.867Z,WARNING,canal-wkthm,Setting GA feature gate ServiceInternalTrafficPolicy=true. It will be removed in a future release.,4148E479-215D-7056-9318-85368F213D76


2024-02-01T12:58:33.564Z,NONE,canal-q87j6,"2024-02-01 12:58:33.564 [INFO][11868] monitor-addresses/startup.go 432: Early log level set to info
",418CE3BC-B3D4-F13B-C653-32F3A5914F79


2024-02-01T12:58:33.564Z,NONE,canal-q87j6,"2024-02-01 12:58:33.564 [INFO][11868] monitor-addresses/startup.go 315: Skipped monitoring node IP changes when CALICO_NETWORKING_BACKEND=none
",E8E10D1B-74A3-7437-B1E4-952B796E9A69


2024-02-01T12:58:33.561Z,WARNING,canal-q87j6,Setting GA feature gate ServiceInternalTrafficPolicy=true. It will be removed in a future release.,7A4738E1-4196-0FC9-0ABE-EFC504A6C3D5


2024-02-01T12:58:33.123Z,NONE,canal-fs4vm,"2024-02-01 12:58:33.123 [INFO][20724] monitor-addresses/startup.go 432: Early log level set to info
",0E4A2CC2-64FA-C1FD-6235-6943512FC1F0

This serves as a reasonably good validation test that both getlogs.py and your network path to OpenSearch (via ingress or port-forwarding) are configured correctly, and that all of the prerequisites are met.

 

If you get an error message instead of log output, you'll need to debug the issue before you proceed.

 

Syntax help

 

Like many command-line tools, getlogs.py -h outputs help text, describing the parameters it takes:

python3.11 logging/bin/getlogs.py -h

The parameters fall into three main categories: query search parameters that specify which log messages you want to see; query output parameters that influence the command's output (maximum rows, output to a file, CSV or JSON, and so on); and connection settings, used when you don't set environment variables to provide those values. The start and end time parameters (-st | --start [DATETIME ...] and -en | --end [DATETIME ...]) are listed under query output settings, but I consider them more like query search parameters.

 

Time Zones

 

The datetime values you pass to the --start and --end parameters (examples towards the end of this post) should be in the local time zone your host is configured to use. So, for example, if your host Linux machine is set to Eastern Standard Time (EST), you would specify the start and end times for the period you want log messages from in EST too.

 

However, the datetime values in the @timestamp column of getlogs.py's CSV output are in UTC (Coordinated Universal Time). So, for example, if you run getlogs.py on a host whose local time is EST and request log messages from between 13:00 and 14:00 EST, you should expect results showing log messages with timestamp values between 18:00 and 19:00 UTC (EST is 5 hours behind UTC). Remember to take this into account when specifying start and end time filters, and when viewing or storing log message output from getlogs.py that contains a timestamp column - it is a potential source of slight confusion.
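
A quick way to see the offset in action is to print the same instant in both local time and UTC:

# The same instant, first in the host's local time zone, then in UTC
date "+%Y-%m-%d %T %Z"
date -u "+%Y-%m-%d %T %Z"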

 

Using getlogs.py with filter and search parameters

 

Namespace

The first thing we will usually do is add a namespace parameter, to get log messages from the SAS Viya platform and any solutions deployed in it. My SAS Viya namespace is gelcorp (and I've set up an ingress, and the environment variables to use it and authenticate as 'admin'), so for me the command becomes:

python3.11 logging/bin/getlogs.py -n gelcorp

I noticed that the response time of queries was significantly longer with this parameter, but otherwise it works as expected.

 

Specify Output Fields (_id is always appended)

 

By default, CSV output from getlogs.py has the following fields: @timestamp,level,kube.pod,message,_id - the actual fields output for any particular results are printed in a header row before the results. I'm not sure why kube.pod was chosen instead of logsource. The _id field is always output, and it is always appended to the end of each CSV row.

 

Note: If you specify the _id field as one of the values of the --fields parameter, it is output in the results both in the position you specify and again at the end of each row. There is thus little point in specifying the _id field as one of the values of the --fields parameter, unless for some reason you MUST have it at a particular position. You'll get it at the end of each row regardless.

 

The saved search I usually open first on the Discover page in OpenSearch Dashboards is called Log Messages. We can imitate it in getlogs.py with --fields @timestamp level logsource message. (Again, the log document _id will be appended to the end of each results row without us asking for it, and we can't suppress it.):

python3.11 logging/bin/getlogs.py -n gelcorp --fields @timestamp level logsource message

Example results (just showing the first row, for the format - there were 10 rows of results):

Searching index:
Search complete
@timestamp,level,logsource,message,_id
2024-02-01T15:27:36.583Z,INFO,workload-orchestrator,"The job ""1832"" has finished with exit code 0.",37FD93EA-33B6-8986-8AAD-9ABAB4114594

Search for a string

 

To create some log messages containing a distinctive string, I opened SAS Studio and ran the following line of SAS code, once to begin with:

%put The quick brown fox jumped over the lazy dog;

When that code runs in a compute session in SAS Studio, the compute server process writes at least one log message containing that string to its container (and thus pod) logs. We should be able to find it with getlogs.py, like this:

python3.11 logging/bin/getlogs.py -n gelcorp --fields @timestamp level logsource message --search 'The quick brown fox jumped over the lazy dog'

Here is the same command, split over several lines with backslash line-continuation characters to make it a little easier to read:

python3.11 logging/bin/getlogs.py \
    -n gelcorp \
    --fields @timestamp level logsource message \
    --search 'The quick brown fox jumped over the lazy dog'

Here's the output from running that command (in either the single-line or multi-line form above - they are identical) on my environment. There are two lines of results - two matching log messages in the gelcorp namespace:

Searching index:
Search complete
@timestamp,level,logsource,message,_id
2024-02-01T15:37:14.678Z,INFO,compsrv,80   %put The quick brown fox jumped over the lazy dog;,ABF69779-B0DE-61B7-8ED8-A287D0395576


2024-02-01T15:37:14.678Z,INFO,compsrv,The quick brown fox jumped over the lazy dog,2CD44B04-7E74-7DD1-4A31-8AD54230426F

One log message is the SAS log output echoing the line it is running. The second is the result of running it - the text we asked SAS to '%put' into the program log.

 

Tip: The @timestamp values in the results are given in UTC. To check that they fall within the time period you intended, it helps to know the abbreviated name of the time zone your host machine is set to use, and that time zone's offset from UTC.

From a bash prompt, run date +%Z (with a capital Z) to report your host machine's time zone abbreviation, e.g. EST, according to the notation used by your host OS. Run date +%:z (with a lowercase z) to report that time zone's offset from UTC, e.g. -05:00, which means the host's time zone is five hours behind UTC.
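
Here are both checks together, as you would run them:

date +%Z     # time zone abbreviation, e.g. EST
date +%:z    # offset from UTC, e.g. -05:00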

 

Time period

 

getlogs.py outputs messages from the last hour by default. It doesn't (yet) have parameters to specify other relative time spans, like 'last 15 minutes' or 'last 24 hours', but bash's date tool can do date calculations and format dates as we require them. This article was very helpful in writing the examples which follow below: https://www.unix.com/tips-and-tutorials/31944-simple-date-time-calulation-bash.html

 

Using the --start and --end parameters to getlogs.py, you can specify a custom relative time period fairly easily. Both parameters take a datetime string in the format Y-M-D H:M:S, for example: 2024-02-01 15:00:00.

 

Results for the last 15 minutes

 

We add some bash statements to calculate the time now and 15 minutes ago, and format these two times as getlogs.py expects:

datetime_now=$(date "+%Y-%m-%d %T")
echo ${datetime_now}

datetime_15mins_ago=$(date --date "${datetime_now} 15 minutes ago" "+%Y-%m-%d %T")
echo ${datetime_15mins_ago}

# Log messages containing a specific string from gelcorp namespace in the last 15 minutes
python3.11 logging/bin/getlogs.py \
    -n gelcorp \
    --fields @timestamp level logsource message \
    --search 'The quick brown fox jumped over the lazy dog' \
    --start ${datetime_15mins_ago} --end ${datetime_now}

 

Because more than 15 minutes (but less than an hour) had passed between the last time I ran my one-line SAS program to put that distinctive message in the logs and the first time I ran the code above, this output was correct:

 

Searching index:
No results found for submitted query.

 

If you include the --search parameter to look for messages containing a specific string, and filter to e.g. the last 15 minutes, you won't see any results if the string has not appeared in any log messages in the last 15 minutes! That's correct.

 

We could fix that either by increasing the time period or by running the code again. I chose to try the latter first, and ran the same one-line SAS program again to create another, more recent log message containing that distinctive string. Then I re-ran the same block of bash code above, with more interesting results - the rows are similar to ones we saw in earlier output, but the timestamps and IDs are different:

 

Searching index:
Search complete
@timestamp,level,logsource,message,_id
2024-02-01T16:17:27.010Z,INFO,compsrv,80   %put The quick brown fox jumped over the lazy dog;,AD8056A4-81E2-97D6-3CCA-875A04B0E0A8


2024-02-01T16:17:27.010Z,INFO,compsrv,The quick brown fox jumped over the lazy dog,124E81F5-7BDC-8DB4-FB92-3EBD91A7E6F3

 

Results for the last 2 hours

 

Increasing the time period we search to the past 2 hours, by modifying the bits of bash script before the call to getlogs.py, we expect to see both sets of results:

datetime_now=$(date "+%Y-%m-%d %T")
echo ${datetime_now}

datetime_2hrs_ago=$(date --date "${datetime_now} 2 hours ago" "+%Y-%m-%d %T")
echo ${datetime_2hrs_ago}

# Log messages containing a specific string from gelcorp namespace in the last 2 hours
python3.11 logging/bin/getlogs.py \
    -n gelcorp \
    --fields @timestamp level logsource message \
    --search 'The quick brown fox jumped over the lazy dog' \
    --start ${datetime_2hrs_ago} --end ${datetime_now}

Results as expected - two pairs of rows matching our search string: one pair from within the last 15 minutes, and one pair from about 40 minutes earlier:

Searching index:
Search complete
@timestamp,level,logsource,message,_id
2024-02-01T16:17:27.010Z,INFO,compsrv,80   %put The quick brown fox jumped over the lazy dog;,AD8056A4-81E2-97D6-3CCA-875A04B0E0A8


2024-02-01T16:17:27.010Z,INFO,compsrv,The quick brown fox jumped over the lazy dog,124E81F5-7BDC-8DB4-FB92-3EBD91A7E6F3


2024-02-01T15:37:14.678Z,INFO,compsrv,80   %put The quick brown fox jumped over the lazy dog;,ABF69779-B0DE-61B7-8ED8-A287D0395576


2024-02-01T15:37:14.678Z,INFO,compsrv,The quick brown fox jumped over the lazy dog,2CD44B04-7E74-7DD1-4A31-8AD54230426F

 

I think this is a good point to wrap up this post. We have not explored all of the filters that getlogs.py offers, so I'd encourage you to look at the output of getlogs.py -h, or explore the great usage examples in the getlogs.py documentation.

 

I am looking forward to seeing the uses this valuable tool can be put to, such as scheduled scripts that extract log data for auditing and monitoring purposes. In a follow-up post, I intend to explore what we can do with the OpenSearch domain-specific query language that getlogs.py can use. There's a lot of potential value in this tool, and I hope you find it as useful as I think it is going to be.
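
As a small taste of what that follow-up will cover, here is roughly what a hand-written DSL query looks like when submitted straight to OpenSearch with curl, reusing the connection environment variables from earlier. Treat this as a sketch: the index pattern (viya_logs-*) and the exact field names are assumptions to verify against your own environment.

# A raw DSL query submitted directly to OpenSearch - a sketch only; verify the
# index pattern and field names against your own environment before relying on them
curl -k -u "$OSUSER:$OSPASSWD" \
    -H 'Content-Type: application/json' \
    "https://$OSHOST:$OSPORT/viya_logs-*/_search" \
    -d '{
      "size": 10,
      "sort": [ { "@timestamp": "desc" } ],
      "query": {
        "match_phrase": { "message": "The quick brown fox jumped over the lazy dog" }
      }
    }'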

 

See you next time!

 

Find more articles from SAS Global Enablement and Learning here.
