Confluent Cloud (CC)
Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka, delivered as a fully managed service.
The Platform Services Confluent Cloud deployment is managed by the Kolibri team.
1. Confluent Cloud setup
User Interface (UI)
To get access to the UI, submit a Jira request: OKTA_UKI_KafkaUI_RW for read/write access or OKTA_UKI_KafkaUI_RO for read-only access.
Links to access the UI: Dev and Prod. The Confluent Cloud UI shows information for both Dev and Prod; access can be requested using the Confluent Cloud UI - Access Request links.
Brokers
CC endpoint in dev: lkc-zg6w6d-69eomp.eu-west-1.aws.glb.confluent.cloud:9092
CC endpoint in prd: lkc-195pyv-69m5l5.eu-west-1.aws.glb.confluent.cloud:9092
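To verify connectivity to a broker endpoint, a quick metadata listing with kcat can help. This is a minimal sketch that assumes you already have an SA key and secret (see the Service Account (SA) Keys section below); the username and password values are placeholders:
# List cluster metadata against the dev endpoint
kcat -L -b 'lkc-zg6w6d-69eomp.eu-west-1.aws.glb.confluent.cloud:9092' \
  -X security.protocol=sasl_ssl -X sasl.mechanism=PLAIN \
  -X sasl.username=<cc-key> -X sasl.password=<cc-password>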
Topics
Topics are created by submitting a PR to this repo and asking for a merge in the #confluent-cloud Slack channel. All topics should have an env as a prefix (for example qa.sbg.relevant.events).
GroupIDs
All consumer groups should have cbk as a prefix, for example cbk.fred.local.
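As an illustration, kcat's balanced consumer mode (-G) can consume with a cbk-prefixed group id. The topic and credentials below are placeholders, not a real assignment:
# Consume a topic with a cbk-prefixed consumer group
kcat -b 'lkc-zg6w6d-69eomp.eu-west-1.aws.glb.confluent.cloud:9092' \
  -G cbk.fred.local qa.sbg.relevant.events \
  -X security.protocol=sasl_ssl -X sasl.mechanism=PLAIN \
  -X sasl.username=<cc-key> -X sasl.password=<cc-password>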
Service Account (SA) Keys
Access to individual topics is regulated through the SA keys - each SA key has consumer/producer privileges and access to certain topics (one or many). Keys are created by the DevOps team; request new keys in the #uki-stp-devops Slack channel.
Keys are not rotated automatically. When required, request a new set of keys from the channel above and rotate them manually. Old keys are not deleted until a request has been submitted to the same channel.
The SA secrets are automatically created in SBG Vault after the PRs are merged into the cc-global-iam repo. We can't access Vault directly, so we need to ask an engineer in the #uki-stp-devops channel to send us the SA secrets. Each TLA (DEC, TIM, and FRED) has its own SA key in each environment (dev or prd) to read/write the selected topics. This means a TLA in the qa and nxt environments is managed by one SA key, while the same TLA in the drk and prd environments is managed by another SA key. There are a total of 6 SA keys for the 3 TLAs:
- test-dec-producer-sa
- test-tim-producer-consumer-sa
- test-fred-consumer-sa
- prod-dec-producer-sa
- prod-tim-producer-consumer-sa
- prod-fred-consumer-sa
AWS Secrets Manager
The SA secret can be created or updated using either the AWS console or the AWS CLI. If you create the secret through the AWS console, ensure it has the following 4 tags: Platform:Environment, Platform:BusinessVertical, Platform:CostCode, and Platform:TLA.
If you want to create or update an AWS secret using the AWS CLI, copy the script below to your local machine into a bash file (e.g. manage_secret.sh) and run it:
#!/bin/bash
## Please run `aws sso login --profile <profile-name>` before running this script
# Usage for creating a secret:
# ./manage_secret.sh <secret-name> <api-key> <api-secret> <profile> <tla> <environment>
# <secret-name> : The name of the AWS Secret to be created.
# <api-key> : The API key that will be stored in the secret.
# <api-secret> : The API secret that will be stored in the secret.
# <profile> : The AWS CLI profile to use for authentication
# <tla> : The lowercase TLA
# <environment> : The environment where the secret will be created (must be either "dev" or "prd").
# Usage for updating a secret:
# ./manage_secret.sh <secret-name> <api-key> <api-secret> <profile>
# <secret-name> : The name of the existing AWS Secret to update.
# <api-key> : The new API key to update in the secret.
# <api-secret> : The new API secret to update in the secret.
# <profile> : The AWS CLI profile to use for authentication.
SECRET_NAME="$1" # Name of the secret in AWS Secrets Manager
API_KEY="$2" # API key value to store or update
API_SECRET="$3" # API secret value to store or update
PROFILE="$4" # AWS CLI profile name for authentication
TLA="$5" # The lowercase TLA
ENVIRONMENT="$6" # Deployment environment ("dev" or "prd", required for secret creation)
# Default tags
BUSINESS_VERTICAL="exchange"
COST_CODE="41441"
# Function to create a new secret
create_secret() {
  echo "Creating secret: $SECRET_NAME in $ENVIRONMENT environment..."
  aws secretsmanager create-secret \
    --name "$SECRET_NAME" \
    --description "SA $SECRET_NAME to access CC topics for TLA $TLA" \
    --secret-string "{\"api-key\":\"$API_KEY\",\"api-secret\":\"$API_SECRET\"}" \
    --tags "[{\"Key\":\"Platform:Environment\",\"Value\":\"$ENVIRONMENT\"}, \
             {\"Key\":\"Platform:BusinessVertical\",\"Value\":\"$BUSINESS_VERTICAL\"}, \
             {\"Key\":\"Platform:CostCode\",\"Value\":\"$COST_CODE\"}, \
             {\"Key\":\"Platform:TLA\",\"Value\":\"$TLA\"}]" \
    --profile "$PROFILE" > /dev/null
  echo "Secret created successfully!"
}
# Function to update an existing secret
update_secret() {
  echo "Updating secret value for: $SECRET_NAME..."
  aws secretsmanager put-secret-value \
    --secret-id "$SECRET_NAME" \
    --secret-string "{\"api-key\":\"$API_KEY\",\"api-secret\":\"$API_SECRET\"}" \
    --profile "$PROFILE" > /dev/null
  echo "Secret updated successfully!"
}
# Check if the secret exists; create it if not, otherwise update it
echo "Checking if secret exists in AWS Secrets Manager..."
if ! aws secretsmanager describe-secret --secret-id "$SECRET_NAME" --profile "$PROFILE" > /dev/null 2>&1; then
  # Secret not found, create a new one
  create_secret
else
  # Secret exists, update it
  update_secret
fi
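As a usage illustration (the profile and key values here are placeholders), creating the dev secret for FRED and then reading it back to verify might look like this; get-secret-value is the standard AWS CLI call for retrieving the secret payload:
# Create (or update) the secret, then read it back to verify
./manage_secret.sh test-fred-consumer-sa <api-key> <api-secret> <profile> fred dev
aws secretsmanager get-secret-value \
  --secret-id test-fred-consumer-sa \
  --profile <profile> \
  --query 'SecretString' --output text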
2. Connecting to Confluent Cloud through the command line
For example, to consume from a topic on the prd cluster:
kcat -C -b 'lkc-195pyv-69m5l5.eu-west-1.aws.glb.confluent.cloud:9092' -t 'topic.name' -X security.protocol=sasl_ssl -X sasl.mechanism=PLAIN -X sasl.username=cc-key -X sasl.password=cc-password
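For producing, the same connection properties apply; a minimal sketch (the topic name and credentials are placeholders):
# Produce messages from stdin to a topic on the prd cluster
kcat -P -b 'lkc-195pyv-69m5l5.eu-west-1.aws.glb.confluent.cloud:9092' -t 'topic.name' \
  -X security.protocol=sasl_ssl -X sasl.mechanism=PLAIN \
  -X sasl.username=cc-key -X sasl.password=cc-password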
3. Application setup
Secrets
Secrets are retrieved from AWS Secrets Manager; example configuration:
externalSecret:
  enabled: true
  refreshInterval: 1m
  secretStore:
    create: true
  files:
    fred-kafka-cc:
      data:
        KAFKA_CC_KEY:
          remoteRef:
            key: test-fred-consumer
            property: kafka-cc-consumer-key
        KAFKA_CC_PASSWORD:
          remoteRef:
            key: test-fred-consumer
            property: kafka-cc-consumer-password
rbac:
  enabled: true
serviceAccount:
  enabled: true
  policyDocument: >
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetSecretValue",
            "secretsmanager:ListSecrets"
          ],
          "Resource": "*"
        }
      ]
    }
Env variables
To connect to Confluent Cloud, the Kafka client needs three additional properties (a local-testing sketch follows the list):
1. Security protocol: KAFKA_SECURITY_PROTOCOL: SASL_SSL
2. SASL mechanism: KAFKA_SASL_MECHANISM: PLAIN
3. JAAS config: KAFKA_SASL_JAAS_CONFIG: org.apache.kafka.common.security.plain.PlainLoginModule required username="$(KAFKA_CC_KEY)" password="$(KAFKA_CC_PASSWORD)";
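A minimal sketch for exporting these locally (the key and password values are placeholders; in Kubernetes the $(KAFKA_CC_KEY) and $(KAFKA_CC_PASSWORD) references are expanded from the external secret above):
# Export the three Kafka client properties for local testing
export KAFKA_SECURITY_PROTOCOL=SASL_SSL
export KAFKA_SASL_MECHANISM=PLAIN
export KAFKA_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="<cc-key>" password="<cc-password>";'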
Flink services
There are additional requirements for Flink services that produce with EXACTLY_ONCE semantics:
- The keys that are used need permissions to read and write transactional IDs for checkpointing
- When migrating from the old cluster to Confluent Cloud, it's important to change the Flink job name (checkpointing remembers information about the previously used topic)