Getting Started with Istio 1.0 Service Mesh on Kubernetes
Istio 1.0 dropped earlier this week and I've been heads-down playing with it on a test cluster. I've been watching this project for a while — the 0.x releases were a bit rough around the edges — but 1.0 feels like it's finally ready to take seriously. Here's what I've learned so far.
What Even Is a Service Mesh?
A service mesh is infrastructure for service-to-service communication. Instead of every application handling its own retries, timeouts, circuit breaking, and TLS, you push all of that down into a sidecar proxy that runs alongside each service. The app talks to localhost, the proxy handles the rest.
Istio does this with Envoy sidecars, and it gives you three things I actually care about:
- mTLS between services — mutual TLS, automatically, without touching app code. Your services authenticate each other at the network level.
- Observability — distributed tracing, metrics, and access logs fall out automatically because every request flows through a proxy.
- Traffic management — canary deployments, blue/green, fault injection, all via Kubernetes CRDs. No app changes required.
That last one is the selling point for me. I've done canary deploys with Nginx config hacks and custom header routing in the app layer. Doing it with a VirtualService is so much cleaner.
Installing Istio 1.0 on Kubernetes
Grab the release, put the bundled tools on your PATH, and render the install manifests with Helm. I'm running this on a Kubernetes 1.10 cluster on GKE.
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -
cd istio-1.0.0
export PATH=$PWD/bin:$PATH
# Install via Helm (renders manifests then applies)
helm template install/kubernetes/helm/istio \
    --name istio \
    --namespace istio-system \
    --set global.mtls.enabled=true \
    --set tracing.enabled=true \
    --set grafana.enabled=true \
    --set kiali.enabled=true \
    > istio.yaml
kubectl create namespace istio-system
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f istio.yaml
Wait for everything to come up — it takes a minute:
$ kubectl get pods -n istio-system
NAME                                    READY   STATUS    RESTARTS
istio-citadel-6bbf9fdf5d-kzt2r          1/1     Running   0
istio-egressgateway-7d4697f66-2gghw     1/1     Running   0
istio-ingressgateway-76c95b4b9f-5kl8m   1/1     Running   0
istio-pilot-67dbbf8c7-9xnx8             2/2     Running   0
istio-policy-7ff5d97d55-v7pch           2/2     Running   0
istio-telemetry-6bfcb7d89-7s9bf         2/2     Running   0
istio-tracing-ff94688bb-l9xzz           1/1     Running   0
prometheus-f556886b8-qnpmt              1/1     Running   0
Enabling Sidecar Injection
The whole thing works by injecting an Envoy sidecar container into every pod. You can do this manually, but automatic injection is the way to go — label your namespace and forget about it:
kubectl label namespace default istio-injection=enabled
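If you want to double-check which namespaces have injection turned on, the label is easy to query:

```shell
# List namespaces with the value of their istio-injection label, if any
kubectl get namespace -L istio-injection
```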
Now any pod deployed into the default namespace gets an istio-proxy container injected automatically. You can verify it's working:
$ kubectl get pod my-app-6d8f7b9c4-xkp2l -o jsonpath='{.spec.containers[*].name}'
my-app istio-proxy
There's your sidecar. The app container doesn't know it's there.
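Automatic injection is what I'd recommend, but manual injection is handy for one-off tests or clusters where the mutating webhook isn't set up. Assuming a deployment manifest called my-app.yaml (a placeholder, not something from this post), the sketch looks like:

```shell
# Render the manifest with the Envoy sidecar spliced into the pod spec,
# then apply the result. my-app.yaml is a hypothetical deployment manifest.
istioctl kube-inject -f my-app.yaml | kubectl apply -f -
```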
Deploying a Sample App
Let me deploy Istio's Bookinfo sample, four small services (productpage, details, ratings, reviews) where the product page fans out to the others, and watch what happens:
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS
details-v1-68c7c8666d-9bvhj       2/2     Running   0
productpage-v1-7488d5b96f-w4s2r   2/2     Running   0
ratings-v1-849dcb8cf9-nqrbp       2/2     Running   0
reviews-v1-5bb9f8f86-4p7fb        2/2     Running   0
Notice 2/2 — two containers per pod. That's the app container plus the Envoy proxy. The mesh is live.
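To actually hit the app from outside the mesh, go through the ingress gateway that bookinfo-gateway.yaml wired up. On GKE the gateway service gets a LoadBalancer IP; a rough sketch:

```shell
# Grab the external IP of the Istio ingress gateway
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Request the Bookinfo product page through the mesh edge
curl -s "http://${GATEWAY_IP}/productpage" | head -n 5
```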
Verifying mTLS
This is the part I wanted to see most. You can check the mTLS status between any two services with istioctl:
$ istioctl authn tls-check productpage-v1-7488d5b96f-w4s2r.default details.default.svc.cluster.local
HOST:PORT                                STATUS   SERVER   CLIENT
details.default.svc.cluster.local:9080   OK       mTLS     mTLS
STATUS: OK with both server and client showing mTLS means the connection is mutually authenticated and encrypted. No code changes. No certificate management in the app. Citadel (Istio's CA component) issues the workload certificates and rotates them automatically; the default certificate lifetime is about three months, and it's configurable.
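The mesh-wide mTLS here comes from the global.mtls.enabled=true flag at install time. If you'd rather scope it per service, Istio 1.0 also has an authentication Policy CRD. A minimal sketch for the details service (the names match bookinfo, but I haven't run this exact manifest):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: details-mtls
  namespace: default
spec:
  targets:
  - name: details   # apply only to the details service
  peers:
  - mtls: {}        # require mutual TLS from callers
```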
Traffic Splitting with VirtualService
Here's where it gets fun. The bookinfo app has three versions of the reviews service. Let's send 90% of traffic to v1 and 10% to v2 — a canary rollout:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
kubectl apply -f reviews-virtualservice.yaml
That's it. No Nginx config. No custom load balancer rules. The split is enforced at the Envoy layer. You can watch it in Grafana or Kiali in real time as requests flow through.
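One thing the VirtualService above takes for granted: the v1 and v2 subsets have to be defined by a DestinationRule, which maps each subset to pod labels. The bookinfo samples ship a complete one (samples/bookinfo/networking/destination-rule-all.yaml); the relevant piece for reviews looks roughly like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # matches pods labeled version=v1
  - name: v2
    labels:
      version: v2   # matches pods labeled version=v2
```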
Thoughts After a Week
Istio 1.0 is genuinely impressive. The mTLS story alone is worth it for security-conscious teams — getting encryption and mutual auth between services without touching application code is a big deal. The traffic management primitives are powerful and the observability you get for free is better than most things I've set up manually.
That said, it's not without rough edges. The resource footprint is non-trivial: all those sidecars add up. On large clusters, Pilot's performance has historically been a concern, though 1.0 includes a lot of fixes there. And the CRD API surface is large; there's a learning curve.
I'd start with it on a non-critical cluster and get comfortable with the concepts before rolling it into production. But I'm planning to do exactly that. The zero-code-change security and observability story is too compelling to ignore.
