
Linkerd Traffic Split


You can shift a portion of incoming traffic to different destination services with Linkerd's traffic split functionality. This comes in handy when you want to roll out a newer version of a service gradually, increasing its share of traffic while making sure everything is healthy (canary releases, blue-green deployments, etc.).


To follow this lab you'll need Linkerd installed on your cluster. Follow this guide if you haven't already.
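Before starting, you can confirm that the installation is healthy:

```shell
# Verify that the Linkerd control plane is up and all checks pass
linkerd check
```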

In this lab we'll use two Docker images (oktano/sample-pizza-server-app:v2 and oktano/sample-pizza-client-app:v1). The pizza client generates traffic against the pizza server. The pizza server takes two environment variables that let us adjust its success rate and latency. For the initial state, set the success rate to 10 (which means 100%) and the latency to 100ms.
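If you want to poke at the server image before deploying it to the cluster, a quick local run looks like this. Port 5000 matches what we expose later; the exact HTTP routes the app serves are an assumption, so the curl call below is only a sketch:

```shell
# Run the pizza server locally with a 100% success rate (scale is 0-10)
docker run --rm -d --name pizza-server \
  -p 5000:5000 \
  -e SUCCESS_RATE=10 \
  -e MAX_SLEEP_DURATION=100 \
  oktano/sample-pizza-server-app:v2

# Send a test request (assumes the app responds on /)
curl -s http://localhost:5000/

# Clean up
docker stop pizza-server
```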

You can run the commands below to get to the initial state.

kubectl create ns pizza-app

# Add annotation for linkerd's automatic proxy injection

kubectl annotate ns pizza-app linkerd.io/inject=enabled

# Deploy pizza server application

kubectl create deployment pizza-server-v2 --image oktano/sample-pizza-server-app:v2 -n pizza-app
kubectl set env deployment/pizza-server-v2 SUCCESS_RATE=10 MAX_SLEEP_DURATION=100 -n pizza-app

# Expose pizza server application

kubectl expose deployment pizza-server-v2 -n pizza-app --port 5000 --target-port 5000

# Deploy pizza client application

kubectl create deployment pizza-client --image oktano/sample-pizza-client-app:v1 -n pizza-app
kubectl set env deployment/pizza-client CALL_CENTER_URL=http://pizza-server-v2:5000 -n pizza-app

Wait for a little bit and verify that your deployment’s success rate is 100%.

linkerd -n pizza-app stat deploy

# I removed irrelevant columns from the output
NAME             SUCCESS
pizza-client        -
pizza-server-v2  100.00%
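If pizza-server-v2 shows no stats at all, verify that the proxy was actually injected; every pod in the namespace should list a linkerd-proxy container:

```shell
# Print each pod with its container names; look for 'linkerd-proxy'
kubectl get pods -n pizza-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
```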

Let's deploy our 'newer version' of pizza-server. We'll use the same image with a different success rate and shift a portion of traffic to it. Deploy and expose 'v3' with the commands below. You can set the success rate and max sleep duration to whatever you want.

kubectl create deployment pizza-server-v3 --image oktano/sample-pizza-server-app:v2 -n pizza-app
kubectl set env deployment/pizza-server-v3 SUCCESS_RATE=0 MAX_SLEEP_DURATION=100 -n pizza-app

kubectl expose deployment pizza-server-v3 -n pizza-app --port 5000 --target-port 5000

Now shift some traffic to v3. Create a TrafficSplit resource and Linkerd will take care of the rest. Again, you can play with these values; I'll shift 10% of the traffic to v3 to start.

cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: traffic-split-v3
  namespace: pizza-app
spec:
  service: pizza-server-v2
  backends:
  - service: pizza-server-v2
    weight: 900m
  - service: pizza-server-v3
    weight: 100m
EOF

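You can confirm the resource was created and inspect its backends and weights; TrafficSplit is a CRD, so it is visible through kubectl like any other resource:

```shell
# List TrafficSplits and show the details of ours
kubectl get trafficsplit -n pizza-app
kubectl describe trafficsplit traffic-split-v3 -n pizza-app
```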

After that, Linkerd will start shifting about 10% of the traffic to the pizza-server-v3 service. You can see this in the dashboard; check the RPS values.
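If you prefer the CLI over the dashboard, you can watch the traffic move from the client's point of view:

```shell
# Show per-deployment stats for traffic originating from pizza-client
linkerd -n pizza-app stat deploy --from deploy/pizza-client
```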

Linkerd shifting traffic to new version

The pizza-server-v2 deployment's success rate is still 100%, because it is a separate deployment from pizza-server-v3. To see the overall success rate from the pizza-client deployment's point of view, run:

linkerd -n pizza-app routes deploy/pizza-client --to service/pizza-server-v2

ROUTE        SERVICE            SUCCESS   ...
[DEFAULT]    pizza-server-v2    89.17%    ...
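That overall number is roughly what the weights predict: v2 (weight 900m) succeeds 100% of the time and v3 (weight 100m, SUCCESS_RATE=0) succeeds 0% of the time, so the expected rate is (900 * 100 + 100 * 0) / 1000 = 90%, close to the observed 89.17%. A quick sanity check of that arithmetic:

```shell
# Expected overall success rate for a 900m/100m split
v2_weight=900; v2_success=100   # v2: SUCCESS_RATE=10 -> 100%
v3_weight=100; v3_success=0     # v3: SUCCESS_RATE=0  -> 0%
expected=$(( (v2_weight * v2_success + v3_weight * v3_success) / (v2_weight + v3_weight) ))
echo "expected success rate: ${expected}%"   # prints 90%
```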


To remove everything from this lab, run the kubectl delete ns pizza-app command.