
Configure an external PostgreSQL for SAS Viya in your Open-Source Kubernetes deployments – part 2


In the first part of the series, we focused on the reasons to choose an external PostgreSQL instance for the SAS Viya Data Infrastructure Server and how to provision it with the IaC tool for Kubernetes.

 

In this 2nd article, we detail how the viya4-deployment GitHub project (aka DaC tool) can now be configured to deploy SAS Viya (with an external PostgreSQL server).

 

Then we broaden the discussion to additional considerations around the use of an external PostgreSQL instance for the Viya platform (HA, connections, tuning, technical benefits).

 

Finally, in the last section of the article, we compare the Kubernetes cluster CPU and memory resource usage and reservation between two environments (one using the internal Crunchy Data Server and one using the external PostgreSQL instance running on a distinct VM).

 

How to configure SAS Viya for an external PostgreSQL (with the DaC tool)

 

The option to use an external PostgreSQL server as the SAS Infrastructure Data Server has been available for some time.

 

This excellent article from Edoardo Riva is the perfect starting point to understand how to configure Viya to use it.

 

Basically, you have to provide your Viya environment with some connection information and credentials, so all the Viya components that need to interact with the PostgreSQL Server database (write or read data from the SAS Infrastructure Data Server) can do so.

 

Here, we just look at an example for an open-source Kubernetes platform, using the most natural companion to the IaC: the viya4-deployment tool (aka "DaC", for "Deployment as Code").

 

PostgreSQL Server details

 

The variable used to define the PostgreSQL details (internal or external) in the DaC tool is named V4_CFG_POSTGRES_SERVERS and is actually a "map" of objects.

 

The variables documentation page shows the various available options, such as the server FQDN and port, the database name, the size of the internal PostgreSQL and pgBackrest PVCs, etc…

 

Note that several SAS Viya platform offerings require a second internal Postgres instance referred to as SAS Common Data Store or CDS PostgreSQL (the CDS PostgreSQL can also be provisioned by the IaC tool, as noted in the first part of the series).

 

Here is an example for an external PostgreSQL server:

 

# External servers
V4_CFG_POSTGRES_SERVERS:
  default:
    internal: false
    admin: pgadmin
    password: "password"
    fqdn: mydbserver.local
    server_port: 5432
    ssl_enforcement_enabled: true
    database: SharedServices
  other_db:
    internal: false
    admin: pgadmin
    password: "password"
    fqdn: 10.10.10.10
    server_port: 5432
    ssl_enforcement_enabled: true
    database: OtherDB
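
And here is a minimal sketch of a configuration that also covers the SAS Common Data Store instance mentioned above. The key name "cds-postgres" and the all-internal setup are shown for illustration only; check the viya4-deployment variables documentation for your version:

# Internal servers, including the optional CDS PostgreSQL instance (illustrative)
V4_CFG_POSTGRES_SERVERS:
  default:
    internal: true
  cds-postgres:
    internal: true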


Trusted certificate for the PostgreSQL Server

 

In order to encrypt the communications between the Viya "clients" (all services that need to persist or request information in the SAS Infrastructure Data Server) and the PostgreSQL service, we also need to provide the PostgreSQL certificate file location, so the DaC tool can add it to the SAS Viya trust store.

 

rp_1_TRUSTED-CERT.png


 

The PEM-encoded certificate file that we need is on the PostgreSQL server and, as shown in the first part of the series, its location can be found in the PostgreSQL settings.

 

SELECT name, setting FROM pg_settings WHERE category = 'Connections and Authentication / SSL';

rp_2_sslcertfile-1024x499.png

 

So, in this case, the V4_CFG_TLS_TRUSTED_CA_CERTS variable in the DaC ansible vars file should point to this certificate file, located on the external PostgreSQL server in /etc/ssl/certs/ssl-cert-sas-rext03-0038.pem.
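
As a minimal sketch of the corresponding ansible vars entry (depending on your DaC version, the variable may expect a single PEM file or a directory of PEM files, so check the variables documentation; the local path below is hypothetical and assumes the certificate has been copied to the machine running the playbooks):

# Trusted CA certificate(s) for the external PostgreSQL server (hypothetical local path)
V4_CFG_TLS_TRUSTED_CA_CERTS: /path/to/local/copy/ssl-cert-sas-rext03-0038.pem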

 

External PostgreSQL Considerations

 

HA PostgreSQL

 

rp_3_HA-pg-schema-300x163.png

The SAS Infrastructure Data Server is a critical component of the Viya platform: if, at some point, it fails or is not working properly, the entire Viya platform stops working as designed.

 

So, running it as a "stand-alone" service would introduce a major SPOF (Single Point Of Failure) in the Viya platform architecture.

 

One of the benefits of the internal PostgreSQL server implementation with the Crunchy Data PostgreSQL operator is that we automatically have a clustered installation of PostgreSQL (1 primary and 2 replicas) which makes the SAS Infrastructure Data Server component Highly-Available.

 

Now the question is: how can we have the same high availability with an external PostgreSQL?

 

There are various possibilities.

 

The external PostgreSQL implementation also matters.

 

As of today, according to the official documentation, we support the following options for the external PostgreSQL implementation:

 

  • PostgreSQL (Open Source)
  • Microsoft Azure Database for PostgreSQL - Flexible Server
  • Amazon RDS for PostgreSQL
  • GCP Cloud SQL for PostgreSQL

If you decide to use one of the supported cloud-managed external PostgreSQL databases, you need to check the provider's current documentation. However, these services generally offer High Availability (HA) configurations with automatic failover capabilities.

 

For example, in Azure with the PostgreSQL Flexible Server: “When high availability is configured, flexible server automatically provisions and manages a standby. Write-ahead-logs (WAL) is streamed to the replica in synchronous mode using PostgreSQL streaming replication.”

 

An alternative is to use the Open Source version of PostgreSQL.

 

It can be installed on a standalone VM, and that’s exactly what the IaC for Kubernetes tool does. However, this does not provide any high availability by default. To get HA, PostgreSQL should be deployed as a cluster of at least three machines, complemented with something like Pgpool for request load balancing and a virtual IP address (similar to the Viya 3.5 architecture). Implementing such a setup is not trivial.

 

Another possibility would be to rely on Kubernetes itself to provide these HA capabilities by deploying PostgreSQL with an operator. The operator could be deployed in the same Kubernetes cluster as Viya.

 

Here is a list of PostgreSQL operator implementations that could potentially be used:

  • Zalando Postgres Operator
  • CloudNativePG
  • Percona Operator for PostgreSQL

(credits to @Carus Kyle for providing this list 😊)

 

While the experience and feedback on these options are still limited, they will likely allow the installation of a highly available PostgreSQL server. It would be up to the customer team to ensure the ongoing operation of this PostgreSQL server. While these implementations are open source, most of them (such as CloudNativePG or Percona) can come with a "full support" model provided by specialized vendors.
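
To give a rough idea of what such an operator-managed deployment looks like, here is a minimal, hypothetical sketch in the style of CloudNativePG (three instances, i.e. one primary and two replicas, mirroring the internal Crunchy Data topology; the names and sizes are illustrative and the exact fields should be validated against the operator's documentation):

# Hypothetical CloudNativePG-style cluster for the SAS Infrastructure Data Server
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: sas-infra-postgres        # illustrative name
  namespace: postgres             # illustrative namespace, distinct from the Viya namespace
spec:
  instances: 3                    # one primary plus two replicas for high availability
  storage:
    size: 20Gi                    # illustrative size, to be adapted to your environment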

 

PgBouncer

 

rp_4_pgbouncer-schma.png

One of the challenges when using a Cloud Managed Service for the external PostgreSQL is to ensure that the maximum number of connections to the database is high enough for SAS Viya (we have a lot of microservices that need to open connections with the PostgreSQL database).

 

As stated in the official requirements: “An external PostgreSQL server should support a maximum number of connections, max_connections on some providers, and max_prepared_transactions of at least 1024.”

 

Cloud providers sometimes apply limits and thresholds depending on the instance type that underpins the database, including on “max_connections”. For example, this used to be problematic with Azure Database for PostgreSQL - Single Server, as a higher (and very costly) pricing tier was required to meet this requirement.

 

Today though, the default Azure Database for PostgreSQL implementation is "Flexible Server", which supports up to 5,000 connections. That is much better; however, even 5,000 connections might not be enough for some large multi-tenant environments.

 

That’s where PgBouncer can help. You might have heard about it; it is also briefly mentioned in our SAS Viya tuning guide.

 

PgBouncer is a connection pooler for PostgreSQL. It allows a larger number of frontend connections to share a smaller pool of backend connections. The aim of PgBouncer is to lower the performance impact of opening new connections to PostgreSQL, by re-using database connections.

 

Finally, note that Kubernetes operators for Postgres, such as Zalando's, usually include the PgBouncer component by default.
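
As an illustration, with the Zalando operator the connection pooler can typically be enabled directly in the postgresql custom resource. The sketch below is hypothetical (the names, version and sizes are illustrative, and the exact spec should be checked against the operator's documentation); it also sets the connection-related parameters to the minimum values quoted in the requirements above:

# Hypothetical Zalando-operator cluster with the built-in PgBouncer pooler enabled
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: sas-infra-postgres            # illustrative name (must start with the teamId)
spec:
  teamId: sas                         # illustrative team identifier
  numberOfInstances: 3                # one primary plus two replicas
  enableConnectionPooler: true        # deploys a PgBouncer-based pooler in front of the cluster
  volume:
    size: 20Gi                        # illustrative size
  postgresql:
    version: "15"                     # illustrative PostgreSQL version
    parameters:
      max_connections: "1024"            # minimum value stated in the SAS Viya requirements
      max_prepared_transactions: "1024"  # idem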

 

Technical Benefits?

 

There are several benefits to using an external PostgreSQL implementation for the SAS Infrastructure Data Server. Here is a screenshot of a colleague’s comment (from an internal discussion about the pros and cons of the "external postgres" decision) that I found quite enlightening:

 

rp_5_advantages-1.png

 

Finally, from my perspective, I was also curious to know whether there was a significant reduction in the Kubernetes hardware requirements when the SAS Infrastructure Data Server is located outside of the Kubernetes cluster. In other words: “What is the impact on cluster resource usage of running PostgreSQL outside of the Kubernetes cluster?”

 

Well, that is exactly the question I have tried to answer in the next section 😊

 

Resource consumption comparison

 

When running the OOTB Crunchy Data Server in the Viya namespace, we have three PostgreSQL pods corresponding to our clustered instances, as well as the Postgres Operator pod and a pgBackRest instance pod when backups are running.

 

So, surely replacing that with an external PostgreSQL outside of the cluster will reduce the overall Viya CPU and memory footprint on the cluster itself!?

 

Since I was very curious about that, I made some comparisons using k9s and some of the Grafana dashboards that come with SAS Viya 4 Monitoring for Kubernetes.

 

I deployed the same Viya order in two upstream open-source Kubernetes environments: one was deployed with the internal Crunchy Data Server, and the other used an extra machine with a standalone external PostgreSQL deployment (outside of the Kubernetes cluster).

 

Please note that these are very simple, one-time tests, and the values reported here varied over time. Also, the comparisons were made while the system was idle, as we were not generating any user activity.

 

It is clearly not something as thorough and systematic as an official benchmark done by our Performance teams at SAS. However, I found the results quite insightful and I believe they give a rough idea of the differences in terms of resource reservation and utilization.

 

Kubernetes objects, overall CPU and memory utilization

 

With Crunchy Data Server ("Internal PostgreSQL")

 

rp_6_k9s-intPG.png

 

Without Crunchy Data Server ("External PostgreSQL")

 

rp_7_k9s-extPG.png

 

As we can see with the k9s "pulses" view, running PostgreSQL outside of the cluster reduces the number of Kubernetes objects (Pods, ReplicaSets, StatefulSets, PersistentVolumes, etc.). In our case, the overall impact on the CPU and memory used is low; however, it is only one data point from a single measurement in an idle and empty environment.

 

Overall resources usage

 

With Crunchy Data Server ("Internal PostgreSQL")

 

rp_8_grafana-overall-intPG.png

 

Without Crunchy Data Server (“External PostgreSQL”)

 

rp_9_grafana-overall-extPG.png

 

While the overall cluster differences in CPU and memory usage are not really significant (around 1%), there is a slightly bigger difference in the resource request commitments (49.9% vs. 53.2% for CPU). Less CPU and memory is "reserved" on the nodes when the external PostgreSQL server is used.

 

CPU utilization for the Viya namespace

 

With Crunchy Data Server ("Internal PostgreSQL")

 

rp_10_grafana-cpu-intPG.png

 

Without Crunchy Data Server ("External PostgreSQL")

 

rp_11_grafana-cpu-extPG.png

 

Now, if we zoom in specifically on the Viya namespace ("dac"), we clearly see the difference in the CPU requests between the two environments (9.23% vs. 6.57%).

 

MEMORY utilization for the Viya namespace

 

With Crunchy Data Server ("Internal PostgreSQL")

 

rp_12_grafana-mem-intPG.png

 

Without Crunchy Data Server ("External PostgreSQL")

 

rp_13_grafana-mem-extPG.png

 

Finally, the two Grafana graphs confirm a relatively small difference in memory usage in an idle/empty environment.

 

Here are the main takeaways of this first, quick comparison:

  • The amount of CPU and memory resources reserved on the Kubernetes cluster is reduced when PostgreSQL runs outside of the cluster (each of the four Crunchy Data PostgreSQL instances requests almost half a CPU and 3 GB of memory),
  • ...but it remains a relatively small reduction in the overall Viya platform resource requirements.
  • In an idle, freshly deployed environment without any user activity, there is no significant difference in CPU and memory utilization between the two environments.

Conclusion

 

And that’s all for this time.

 

I hope this series was useful to reiterate some of the established architecture principles and configurations for the SAS Infrastructure Data Server component… Thanks for reading!

Comments

Thank you for this article. At this moment we intend to stay with the internal Crunchy database as long as it is provided by SAS, because it is less complicated for us as a customer and it provides high availability. However, an interesting point in the article is that there are HA capabilities when deploying PostgreSQL with an operator. In simple words, if you use such an operator, do you also need three databases?

Hi @touwen_k 

Yes, the PostgreSQL database can be deployed inside the Kubernetes cluster and managed by a Kubernetes operator. While it is not mandatory, most of the open-source Postgres Kubernetes operator projects (e.g. Zalando, CloudNativePG, etc.) offer the possibility to provide HA by running replicas of the primary database instance (in the same way the Crunchy Data operator already does with the current internal Postgres implementation).
