Reference » Integrations » Kubernetes and OpenShift
Conjur v4.9.17 or newer supports pod and service account granularity in machine identity
Conjur v4.9.18 or newer supports Kubernetes and OpenShift integrations, as well as deployment and stateful set machine identity
Conjur Enterprise integrates with the following Kubernetes and Red Hat ® OpenShift container application platforms:
- OpenShift 3.3 and newer
- Kubernetes 1.5 and newer
After deploying this integration, applications running inside a Kubernetes or OpenShift environment can access and retrieve secrets from a Conjur appliance.
This integration securely passes secrets stored in Conjur to applications running in OpenShift or other Kubernetes implementations. Secrets are never exposed to third parties. The Conjur integration provides these features to your OpenShift or other Kubernetes environment:
- End-to-end encryption of secrets through mutual TLS.
- Robust authentication and authorization incorporating Conjur policy, signed certificates, and an internal Kubernetes authenticator.
- Conjur policy provides separation of duties, letting OpenShift security teams control container access while development teams define application requirements.
- Easy deployment of applications across environments and pods.
- Scalability and performance advantages of the Conjur master-follower architecture. Followers handle read-only activity for clients; scale out simply by adding more followers.
- Secret rotation, centralized auditing, and all other advantages of Conjur Enterprise.
In the following documentation, unless otherwise noted, all mentions of Kubernetes include the OpenShift implementation of Kubernetes. Similarly, all mentions of Kubernetes namespaces include the OpenShift concept of projects.
You deploy the components of the Conjur cluster in its own Kubernetes namespace, each component in its own pod on the same or different nodes. User applications are then deployed in other namespaces. Applications gain access to Conjur through authenticated login orchestrated by an authentication sidecar.
The Conjur cluster components work together to provide high availability. The Conjur component locations can change over time as your deployment evolves.
The Conjur master, standbys, and followers can all be placed in the same namespace, as shown above, or the master and standbys can be outside of Kubernetes entirely. For the highest level of availability and protection against data loss inside the Kubernetes platform, schedule the Conjur master and standby pods to run on separate Kubernetes/OpenShift nodes.
Here are brief descriptions of the components shown in the previous diagram.
In the Conjur namespaces:
- Conjur Master supports full read and write operations, as well as management of policies, secrets, and all Conjur services.
- Conjur Standbys are replicas of the master with the ability to become a fully functional master if needed.
- Conjur Followers are read-only replicas of the master that support application read requests, relieving load from the master. Followers can be scaled horizontally. Add followers to add more read capacity.
- Kubernetes Authenticator is a plugin to the Conjur appliance, enabled on the followers to support application pod authentication.
- Master Service manages access to the HAProxy.
- HAProxy manages access to the master and standbys.
- Follower Service load balances and manages access to the followers.
In the application namespaces:
- Application Containers are your deployed applications.
- Sidecar is deployed in the same pod with each user application. The sidecar assists in authentication, obtaining the access token that allows login to Conjur, and writing the token to the shared volume. This is a continuous process, with a refreshed token value every 5 minutes. An access token has a time-to-live of 8 minutes.
- A shared volume is used to provide the Conjur access token to the application.
- Applications can use Summon or the Conjur API to access secrets.
- Summon is a CyberArk Conjur Open Source tool, used to retrieve secrets from Conjur and push values into either environment variables or a volume mount. On application startup, Summon gracefully waits for the sidecar to provide an access token for authentication with Conjur.
- The conjur-api-go API also waits gracefully for the sidecar to provide an access token, like Summon. For other APIs, the application containers need to check for this file on startup and retry until it exists.
The security-related flow is summarized here:
- Pod Verification - The authenticator uses the Kubernetes API to verify that the pod name is a member of the namespace.
- CSR - The sidecar creates a public-private key pair, keeps the private key, and generates a Certificate Signing Request (CSR).
- Conjur Login - If verification passes, the sidecar makes a login request to the Conjur follower using the pod name, namespace, and the CSR. On successful login, Conjur generates a signed certificate and writes it out of band into the shared memory.
- Conjur Kubernetes Authentication - Conjur uses an internal Kubernetes authenticator and webservice resource to generate an encrypted access token for the certificate. The sidecar uses the certificate's private key to decrypt the access token and write it to shared memory. The authenticator generates a new access token every 5 minutes.
- Authorization - Using the access token, Summon requests secrets from Conjur on behalf of the application. Summon can be configured to put secrets into environment variables or shared memory.
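The CSR step above can be sketched with standard OpenSSL commands. This is only an illustration of what the sidecar does internally; the real sidecar generates the key pair and CSR itself, and the subject-name format shown here is an assumption.

```shell
#!/bin/sh
# Illustrative only: the sidecar performs the equivalent of these steps in-process.
POD_NAMESPACE=example-namespace   # hypothetical namespace
POD_NAME=example-pod              # hypothetical pod name

# Create the key pair; the private key never leaves the pod.
openssl genrsa -out client.key 2048

# Generate the CSR that accompanies the login request to the Conjur follower.
# The CN format is an assumption for illustration.
openssl req -new -key client.key -out client.csr \
  -subj "/CN=$POD_NAMESPACE.$POD_NAME"
```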
By assigning machine identities to Kubernetes resources, you can use policy to control:
- The granularity and identity of the Kubernetes resources that authenticate to Conjur.
- The granularity and identity of the Kubernetes resources that can access secrets. The access usually occurs in the context of an application-specific policy, where specific host roles are granted access to specific secrets.
To assign machine identity, you use policy to declare a desired Kubernetes resource as a Conjur host. See About Host Ids for host id syntax for each of the supported Kubernetes resources.
Here are the Kubernetes resources that you can define as Conjur hosts, with guidelines for choosing them.
- Authentication and secret access is by Kubernetes namespace. All resources in the namespace are controlled by the same grants and permissions.
- Use this level of granularity when all applications in the namespace share secrets and access to the secrets can be managed together.
- Authentication and secret access is by Kubernetes deployment name within a namespace. Grants and permissions are specific to a deployment within a namespace.
- Use this granularity for a group of stateless applications that share secrets and access management rules.
- Authentication is by Kubernetes stateful set name within a namespace. Grants and permissions are specific to a stateful set within a namespace.
- Use this granularity for a group of stateful applications that share secrets and access management rules.
- Authentication is by Kubernetes service account name within a namespace. Grants and permissions are specific to a service account within a namespace.
- If you are already using service accounts to control access to secrets, you can build Conjur policy on top of the service account access control. However, be aware that you will then be managing both sets of access control for the same secrets. We recommend using this option as a transition, and move towards using the Kubernetes deployment and stateful set resources as hosts.
- Authentication is by Kubernetes pod name within a namespace. Grants and permissions are specific to a pod within a namespace.
- Use pod name only for testing and proof of concept. Pods tend to stop and restart too often to depend on them for security.
Access to one of the following:
- an OpenShift environment (3.3 or newer) with an internal Docker registry.
- another Kubernetes environment (1.5 or newer).
A license for a Conjur cluster with a master, two standbys, and at least two followers.
The conjur-appliance Docker image from your Conjur support representative.
As a first step, the deployment scripts assign permissions to the service account as follows:
- [get, list] on pods and serviceaccounts within the namespace.
- [create, get] on pods/exec, on any pods that use authn-k8s to inject signed certificates.
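Expressed as Kubernetes RBAC, the permissions above might look like the following sketch. The role name and apiVersion are assumptions (older clusters may use a v1beta1 RBAC API); the deployment scripts create the actual objects.

```yaml
# Sketch only: approximates the permissions the deployment scripts assign.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: conjur-authenticator   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods", "serviceaccounts"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]     # used by authn-k8s to inject signed certificates
  verbs: ["create", "get"]
```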
We provide scripts that deploy a Conjur appliance. The scripts deploy a master, two standbys, two followers, and a proxy load balancer, each in a separate pod in the same Kubernetes namespace.
NOTE: For production deployments, we recommend altering this deployment to place at least one standby on a different node.
Visit the appropriate repository, which contains deployment scripts and a README file.
For OpenShift, use:
For Kubernetes, use:
Follow instructions in the README.
The README includes setting the following environment variables, which are referenced later in this document.
CONJUR_ACCOUNT sets the Conjur account name. The account name identifies the Conjur appliance instance and is required during authentication.
One of the following:
CONJUR_PROJECT_NAME sets the OpenShift project name (namespace) where the Conjur appliance is deployed.
CONJUR_NAMESPACE_NAME sets the Kubernetes namespace where the Conjur appliance is deployed.
CONJUR_ADMIN_PASSWORD is the password to use for the admin account.
DOCKER_REGISTRY_PATH is the path to the user's internal OpenShift Docker registry.
AUTHENTICATOR_SERVICE_ID is the service_id used to define the authenticator web service in Conjur policy.
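For example, a minimal Kubernetes setup might export values like these before running the scripts. All values below are illustrative; substitute your own.

```shell
# Illustrative values only; substitute your own.
export CONJUR_ACCOUNT=mycompany
export CONJUR_NAMESPACE_NAME=conjur        # on OpenShift, set CONJUR_PROJECT_NAME instead
export CONJUR_ADMIN_PASSWORD='CHANGE-ME'
export DOCKER_REGISTRY_PATH=docker-registry.default.svc:5000
export AUTHENTICATOR_SERVICE_ID=subcluster-1
```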
Save the following information, printed by the last deployment script:
- The address for accessing the Conjur Master service from within the OpenShift cluster.
- The https URL for accessing the Conjur Master service from outside of the cluster.
- The admin credentials for your Conjur deployment.
To verify that the deployment is successful, open the Conjur Master URL in your web browser and log in with the admin credentials to access the Conjur Enterprise UI. If you are able to log in, the Conjur cluster has been successfully deployed and configured.
Now that your Conjur cluster is deployed, follow the CLI Installation and Quickstart instructions to perform some initial configuration using the Conjur Master URL and admin credentials provided by the deployment scripts.
This bootstrapping process will provide you with a Conjur user in the security_admin group, which should now be used for future Conjur interactions (such as loading policy) in order to follow the best practice of least-privilege access.
- Step 1 - Prepare and Load Required Conjur Policy
- Step 2 - Initialize the CA
- Step 3 - Configure Conjur Authenticators
- Step 4 - Deploy the Sidecar Image in the Application Pod
- Step 5 - Add Resources to Application Manifests
- Step 6 - Prepare Application to Retrieve Secrets
- Step 7 - Define Policy for Applications
- Step 8 - Reference Application Policy in the Authenticator Policy
- Step 9 - Load Initial Secret Values into Conjur
- Step 10 - Start up the Application
At least one user needs write permission to load policy and variables into Conjur. This is standard Conjur policy that creates an administrative group of users for Conjur.
Use the following policy as a template:
```yaml
# initializes users
# ted - kubernetes admin
# bob - devops admin
# alice - db admin
# carol - developer
- !group kube_admin
- !group devops
- !group ops
- !group db_admin

# kube_admin and devops groups are members of the ops admin group
- !grant
  role: !group ops
  members:
  - !group kube_admin
  - !group devops

- !user ted
- !grant
  role: !group kube_admin
  member: !user ted

- !user bob
- !grant
  role: !group devops
  member: !user bob

- !user alice
- !grant
  role: !group db_admin
  member: !user alice

- !user carol
```
You need one Kubernetes Authenticator policy per Kubernetes cluster. It services multiple namespaces and applications. See Configuring a Kubernetes Authenticator for context.
This policy defines:
- The authenticator's service-id.
- The Conjur webservice that generates the CSRs.
- The CA certificate and key.
- A group to represent the hosts and applications that can use this authenticator.
- Permissions (read and authenticate) for the above group to use this authentication service.
- A layer of hosts (namespaces, pods, and service accounts) that can use this authentication service.
- Optional annotations that turn on platform-specific icon use and enable platform-specific searches in the UI.
- A layer of applications that can use this authentication service.
A sub-policy with the apps id is required:

```yaml
- !policy
  id: apps
```
The apps sub-policy defines:
- Hosts that can authenticate. A host is a machine entity that can login to Conjur and authenticate.
- Applications that can authenticate. Policy for user applications is typically defined in separate policy files. This section links your application-specific policies to this Kubernetes authentication policy.
About Application Layer References
In the apps sub-policy, each application is referenced with a grant:

```yaml
- !grant
  role: !layer /application-policy-id
```
This reference links the application to this Kubernetes authenticator policy. It is a reference to a layer that exists in a different policy.
The host ids represent Kubernetes resources. The format of the host id determines the granularity of authentication and secret access that you want to enforce.
- [namespace]/*/* = authentication and secret access is by namespace. The two asterisks are wildcards for all Kubernetes resource types and all resource names within the namespace. With this format, all resources in a namespace are controlled by the same grants and permissions.
- [namespace]/deployment/[name] = authentication and secret access is by deployment name within a namespace. Grants and permissions are specific to a deployment within a namespace.
- [namespace]/stateful_set/[name] = authentication and secret access is by stateful set name within a namespace. Grants and permissions are specific to a stateful set within a namespace.
- [namespace]/pod/[name] = authentication and secret access is by pod name within a namespace. Grants and permissions are specific to a pod within a namespace.
- [namespace]/service_account/[name] = authentication and secret access is by service account name within a namespace. Grants and permissions are specific to a service account within a namespace.
The authentication service prepends additional known information to the host id you declare: because hosts are declared inside the authenticator policy's apps sub-policy, the full host id becomes conjur/authn-k8s/[service-id]/apps/[your-host-id]. When the two asterisks are used to represent all resources in a namespace, the full host id becomes conjur/authn-k8s/[service-id]/apps/[namespace]/*/*.
Example host ids
```yaml
body:
- !layer

- &hosts   # list hosts here
  - !host
    id: namespace-1/*/*   # namespace-1 authenticates. The wildcards are required.
  - !host
    id: namespace-1/deployment/web03   # deployment 'web03' in namespace 'namespace-1' authenticates.
  - !host
    id: namespace-2/stateful_set/db   # stateful set 'db' in namespace 'namespace-2' authenticates.
  - !host
    id: namespace-2/pod/pod-10   # pod-10 in namespace-2 authenticates.
  - !host
    id: namespace-3/service_account/sa-20   # sa-20 in namespace-3 authenticates.
```
Hosts can have these annotations:
- kubernetes/authentication-container-name: is required to identify the authenticator container.
- openshift: true is optional but useful to identify OpenShift hosts in the UI.
- kubernetes: true is optional but useful to identify Kubernetes hosts in the UI.
The platform-specific annotations let you see the platform type in the UI and filter hosts by platform.
```yaml
- &hosts
  - !host
    id: some-namespace-1/*/*
    annotations:
      kubernetes/authentication-container-name: authenticator
      openshift: true
```
Kubernetes Authenticator Service Policy Example
Use the following policy as a template. Typically, only the first policy id, the list of hosts, and the list of applications would need to be edited.
```yaml
- !policy
  # conjur/authn-k8s is required; subsequent components are the service id for
  # the Kubernetes authenticator service. This is the SERVICE_ID variable set
  # during deployment.
  id: conjur/authn-k8s/subcluster-1
  body:
  - !webservice
    annotations:
      description: Authentication service for the "subcluster-1" cluster.

  - !policy
    id: ca   # ca policy - do not change
    body:
    - !variable
      id: cert
      annotations:
        description: CA cert for Kubernetes Pods.
    - !variable
      id: key
      annotations:
        description: CA key for Kubernetes Pods.

  - !group
    id: clients
    annotations:
      description: Members of this group can use the subcluster-1 authentication service. This group typically has one member, which is a layer containing the enrolled applications.

  - !permit
    resource: !webservice
    privilege: [ read, authenticate ]
    role: !group clients

  - !policy
    id: apps   # apps policy - the id must be apps
    annotations:
      description: Apps and services in the "subcluster-1" Kubernetes cluster.
    body:
    - !layer

    - &hosts   # list hosts here
      - !host
        # a host id can represent an entire namespace, controller, pod, or service account
        id: some-namespace-1/*/*
        annotations:
          kubernetes/authentication-container-name: authenticator
          openshift: true   # enables platform-specific UI features; replace with kubernetes: true if appropriate

    - !grant
      role: !layer /test-app   # references a layer named test-app that exists in a different policy
      members:
      - !host some-namespace-1/*/*

    - !grant   # add all hosts to the apps layer
      role: !layer
      members: *hosts

  - !grant   # add the apps layer to the clients group, which has permission to authenticate to Conjur
    role: !group clients
    member: !layer apps
```
Save each policy as a .yml file in a location accessible to the Conjur master.
Log into Conjur.
Load each policy file:
```shell
$ conjur policy load --as-group security_admin policy-file-name.yml
$ conjur policy load --as-group security_admin k8s_policy.yml
```
The Kubernetes policy (described above) declares variables to hold a CA certificate and key. After loading the policy, run the following commands to initialize those resources.
- The value of SERVICE_ID must match the service id in the name of the Kubernetes Authenticator policy defined in the section above. For example, if the policy id is "conjur/authn-k8s/subcluster-1", the value of SERVICE_ID is "subcluster-1".
- The value of CONJUR_ACCOUNT must match the Conjur account used in the Deploy Conjur section above.
```shell
SERVICE_ID='**SERVICE_ID**'
CONJUR_ACCOUNT='**CONJUR_ACCOUNT**'

# Generate OpenSSL private key
openssl genrsa -out ca.key 2048

CONFIG="
[ req ]
distinguished_name = dn
x509_extensions = v3_ca
[ dn ]
[ v3_ca ]
basicConstraints = critical,CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
"

# Generate root CA certificate
openssl req -x509 -new -nodes -key ca.key -sha1 -days 3650 -set_serial 0x0 -out ca.cert \
  -subj "/CN=conjur.authn-k8s.$SERVICE_ID/OU=Conjur Kubernetes CA/O=$CONJUR_ACCOUNT" \
  -config <(echo "$CONFIG")

# Verify cert
openssl x509 -in ca.cert -text -noout

# Load variable values
conjur variable values add conjur/authn-k8s/$SERVICE_ID/ca/key "$(cat ca.key)"
conjur variable values add conjur/authn-k8s/$SERVICE_ID/ca/cert "$(cat ca.cert)"
```
These commands create a private key and root certificate and store the contents of those files in the ca/key and ca/cert variables declared in the authenticator policy.
Login or auth calls to the webservice will fail if these resources are not properly defined in policy and initialized.
NOTE: The deployment scripts have already performed this step using the value you set in the SERVICE_ID environment variable. You need to be aware of this step to add additional application clusters or additional authenticator types.
The CONJUR_AUTHENTICATORS environment variable in the Conjur deployment YAML file defines the authentication types used to authenticate with the Conjur cluster. To enable Kubernetes authentication, use a value of the form authn-k8s/[service_id], where service_id is the id assigned to the authn-k8s webservice in Conjur policy. The service_id used here must match the webservice id declared in the Kubernetes policy.
For example, in this snippet from the Conjur webservice policy, a policy branch declares the authn-k8s service with a service_id of prod/gke:

```yaml
- !policy
  id: conjur/authn-k8s/prod/gke
```

The corresponding authenticator value would be authn-k8s/prod/gke.
One authn-k8s service can serve multiple application service ids. Additional Conjur policy for hosts and applications controls which namespaces get access to Conjur and which applications get access to specific secrets. There should be a separate authn-k8s policy (and corresponding service id) for each Kubernetes cluster.
CONJUR_AUTHENTICATORS can include more than one authenticator and more than one authentication type as a comma-separated list. For example, the following shows two authn-k8s services and another unrelated authenticator:
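Such a comma-separated value might look like the following sketch. The service ids and the authn-ldap entry are illustrative, not values from your deployment.

```shell
# Two Kubernetes authenticators plus an unrelated LDAP authenticator (all ids illustrative).
CONJUR_AUTHENTICATORS=authn-k8s/prod/gke,authn-k8s/test/gke,authn-ldap/corp
```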
To disable an authenticator, remove it from the list.
Each application requires a sidecar. There is one sidecar per pod. The application container and the sidecar will share memory in the form of a volume mount.
The sidecar image is available on Docker Hub. Deploy the sidecar as part of the application pod manifest by referencing the cyberark/conjur-kubernetes-authenticator image.
- Add resources for the authenticator sidecar and the shared volume to the spec section of the application manifest.
```yaml
# sidecar
- image: cyberark/conjur-kubernetes-authenticator
  name: authenticator
  env:
  - name: CONJUR_APPLIANCE_URL
    value: https://conjur-follower.**namespace**.svc.cluster.local
  - name: CONJUR_AUTHN_URL
    value: $CONJUR_APPLIANCE_URL/api/authn-k8s/**auth-service-id**
  - name: CONJUR_ACCOUNT
    value: **conjur-account-name**
  - name: CONJUR_AUTHN_LOGIN
    value: authn-k8s/deployment/inventory-deployment
  - name: CONJUR_SSL_CERTIFICATE
    valueFrom:
      configMapKeyRef:
        name: conjur-cert
        key: ssl_certificate
  volumeMounts:
  - mountPath: /run/conjur
    name: conjur-access-token

# application
- image: **your-application-image**
  name: **your-application-name**
  command: ["summon", "rackup"]
  env:
  - name: CONJUR_APPLIANCE_URL
    value: https://conjur-follower.**namespace**.svc.cluster.local
  - name: CONJUR_ACCOUNT
    value: **conjur-account-name**
  - name: CONJUR_AUTHN_TOKEN_FILE
    value: /run/conjur/conjur-access-token
  - name: CONJUR_SSL_CERTIFICATE
    valueFrom:
      configMapKeyRef:
        name: conjur-cert
        key: ssl_certificate
  volumeMounts:
  - mountPath: /run/conjur
    name: conjur-access-token
    readOnly: true

# volume for storing the access token
volumes:
- name: conjur-access-token
  emptyDir:
    medium: Memory
```
CONJUR_APPLIANCE_URL identifies the follower service running in the Kubernetes namespace. The value is https://**servicename**.**namespace**.svc.cluster.local, where **servicename** is "conjur-follower" and **namespace** is the namespace of the Conjur appliance.
CONJUR_AUTHN_URL identifies the credential service being used to log into Conjur. Use $CONJUR_APPLIANCE_URL/api/authn-k8s/**auth-service-id**, where **auth-service-id** is the authenticator's service-id from policy.
CONJUR_ACCOUNT is the account name designated for the Conjur appliance during initial configuration. You most likely set this environment variable before running the deployment scripts. If so, you can use $CONJUR_ACCOUNT for the value here.
CONJUR_AUTHN_LOGIN identifies the Conjur host (Kubernetes resource) that will login to Conjur. Set this value to a host id that is defined in policy. See About Host Ids for the host id syntax and the list of Kubernetes resources that can be declared as hosts.
CONJUR_AUTHN_TOKEN_FILE identifies the complete path and filename where the sidecar should write the Conjur access token that it obtains on behalf of the application.
CONJUR_SSL_CERTIFICATE is the public SSL certificate value required for connecting to the Conjur follower service. We recommend using a ConfigMap to store the value.
The SSL certificate is generated during Conjur appliance configuration and stored in a .pem file located in the root folder where Conjur was created. The file name is conjur-account.pem, where account is the account name provided for the Conjur appliance.
For example, use this Kubernetes command:

```shell
kubectl create configmap conjur-cert --from-file=ssl_certificate="/path/to/ssl/cert"
```

The equivalent OpenShift command is:

```shell
oc create configmap conjur-cert --from-file=ssl_certificate="/path/to/ssl/cert"
```

Either command creates the conjur-cert ConfigMap and loads the certificate value into it. The manifest then references the value like this:

```yaml
configMapKeyRef:
  name: conjur-cert       # ConfigMap name
  key: ssl_certificate    # the key into the ConfigMap
```
volumeMounts identifies the location of the access token for logging into Conjur.
There are two options for applications to retrieve secrets from Conjur.
Use the Conjur API to retrieve secrets from Conjur
Using API calls, the application gets the access token from the shared volume (/run/conjur/conjur-access-token), authenticates to Conjur, and then requests secrets.
The access token may not be available immediately on application startup. The application may need to wait for the volume to mount, the authentication to occur, and the access token to be written into the shared file.
NOTE: The conjur-api-go API handles the wait gracefully and seamlessly. For other APIs, the application containers should check for the file on startup and retry until the file exists.
See the sidecar README for more information about the API and the access token. The same sidecar is used for both Kubernetes and OpenShift integrations.
See our demo repositories for examples of complete Conjur deployments and test applications that use the API to get secrets from Conjur.
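For APIs other than conjur-api-go, the startup retry described above can be as simple as the following shell sketch. The token path matches the manifest's CONJUR_AUTHN_TOKEN_FILE setting; the wait_for_token function name is our own.

```shell
#!/bin/sh
# Block until the sidecar has written a non-empty access token to the shared volume.
TOKEN_FILE="${CONJUR_AUTHN_TOKEN_FILE:-/run/conjur/conjur-access-token}"

wait_for_token() {
  until [ -s "$TOKEN_FILE" ]; do
    sleep 1
  done
}
```

An application entrypoint could call wait_for_token before starting the real process, so the first secret request never races the sidecar's initial authentication.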
Use Summon to retrieve secrets from Conjur on behalf of the application
- Summon is a CyberArk Conjur tool that reads a file in secrets.yml format, obtains secrets from a source, and injects the secret values as environment variables into any process. Once the process exits, the secrets are gone.
- Summon uses source-specific providers to fetch secrets. For the Conjur Kubernetes integration, applications use the summon-conjur provider.
- Summon gracefully and seamlessly handles the wait for the access token. If your application uses Summon with the summon-conjur provider to get secrets, and has the environment variable CONJUR_AUTHN_TOKEN_FILE=/run/conjur/conjur-access-token set, then Summon will keep retrying until that file exists.
- See Summon documentation for information about installation, configuration, the summon-conjur provider, and the secrets.yml file format.
Example: secrets.yml for Summon
The following example of a secrets.yml file shows several types of allowed entries.

```yaml
DB_USERNAME: !var my-app/db/username
DB_PASSWORD: !var my-app/db/password
REGION: us-east-1
SSL_CERT: !var:file ssl/certs/private
```
- Lines 1 and 2 specify variables with pathnames containing a secret. In this case, assume that a policy with an id of my-app was loaded into Conjur.
- Line 3 specifies a literal string for the secret.
- Line 4 specifies a variable that is a file containing the secret. The contents of the file would also be retrieved from Conjur.
Application policy defines:
- Secrets (as Conjur variables) used by the application
- Permission for at least one human user or group for write access to the secrets (to load the initial value)
- Permission for application instances to read the secrets
Each application has its own Conjur policy, usually each in a separate policy file.
Use the following policy as a template.
```yaml
- !policy
  id: test-app
  owner: !group developers
  body:
  - !layer

  - !policy
    id: test-app-db
    owner: !group operations
    body:
    - &variables
      - !variable password

    - !permit
      resources: *variables
      privilege: [ read, execute ]
      role: !layer /test-app   # reference this layer in Kubernetes policy when adding relevant hosts
```
Load the application policy:
```shell
$ conjur policy load --as-group security_admin policy-file-name.yml
```
Remember to add your application layer (/test-app in the above example policy) in the apps sub-policy of the authenticator policy. See About the apps Sub-Policy Id.
For each secret defined in an application policy, load the initial secret value. You can use the Conjur API, the UI, or the CLI for this step.
Here is an example using the CLI:
```shell
$ conjur variable values add cluster-1/db/password abc$xyz
```
To start the application, use this Kubernetes command:

```shell
kubectl create -f your-manifest.yaml
```

The equivalent OpenShift command is:

```shell
oc create -f your-manifest.yaml
```
Conjur is a robust Enterprise solution with high availability, failover, backup, and restore features to handle downtime gracefully and without data loss. Kubernetes environments can be rebooted, restarted, or fail unexpectedly without causing data loss in a Conjur cluster deployed inside Kubernetes namespaces. In the worst case scenario, where the entire cluster is lost, no data loss occurs if you maintain the recommended backup schedule.
Conjur High Availability
A Conjur High Availability cluster includes standby masters that are ready to take over if the master becomes unavailable. Manual and automatic failovers are possible. See High Availability.
For added protection, deploy the cluster using different nodes for master and standbys.
Consider the following scenarios where the status of Kubernetes pods might affect the Conjur cluster:
- Lose the master - If the master is not operational, no secret rotations or writes can happen. There is no data loss.
- Lose the master with healthy standbys - A standby is promoted to master and operations proceed normally. There is no data loss during standby promotion.
- Lose the standbys (and not the master) - There is no data loss, although the master might not support writes without a standby.
- Lose the master and the standbys - There is no data loss, although no writes can happen, and system degradation occurs. The followers continue to serve read requests.
- Lose the entire Kubernetes environment - Recreate the Conjur cluster from a backup of the master. There is no data loss if the backup schedule is more frequent than your most frequent secret rotation schedule. A consistent and conservative backup schedule is required in any Enterprise environment.
Backups and Disaster Recovery
To facilitate disaster recovery, follow these recommendations:
- During initial configuration, encrypt the master using a master key file. See [Encrypt Using a Master Key File](/server_setup/masterkeyencryption.html#encrypt-master-key).
- Back up the master using the master key file. See Master Key Backup and Restore.
- Develop and follow a backup schedule that is more frequent than your most frequent secret rotation schedule.
- Store the backup files securely, outside of the production Kubernetes environment that you are backing up.
- If restore is needed, restore using the master key file. See Master Key Backup and Restore.
Contact support if you have additional questions.
All read and write activity performed on the master or any follower is captured in log files. Recent log entries are available in the UI for viewing. Conjur does not manage long-term storage of audit entries.
We recommend that you establish procedures to capture and store audit entries. Features are available for filtering log records and for adding custom events to logs.
See /reference/services/audit for information.
The Conjur UI shows Conjur resources defined by loaded policy. You can view loaded policy, users, hosts, webservices, secrets (if authorized), health, and more.
Open a web browser and access the Conjur Enterprise UI:
where the master endpoint is the address provided by the last deployment script for access outside of the cluster.
The Conjur Enterprise UI login page should appear.
Log in using a Conjur username and password.
Click Hosts to see the hosts defined by policy and their status. OpenShift or Kubernetes hosts are identified with appropriate icons. You can see an OpenShift host in the figure below.
Type "openshift" or "kubernetes" in the search box to filter the UI display to show only resources associated with a specific platform type.
NOTE: The platform-specific search and icon features depend on correct annotations in policy. See About Host Annotations.