What I learned today Nov 2nd 2018
Prologue
The plan was to get some logging infrastructure set up in kubernetes. I have never done this before, so I kept my mind open. Stuff started falling apart, so I stopped doing that and got to reading and experimenting.
I found some "all in one" solutions that put a fluentd DaemonSet log collector on each node, and then send the logs to a kubernetes-hosted StatefulSet Elasticsearch cluster.
I wanted to adjust that slightly to use AWS's Elasticsearch service instead (I'm in love with managed services that do the complicated / tedious stuff for me).
While reconfiguring the fluentd container I learned stuff. In fact I learned lots of stuff.
What
- You cannot send an AWS Kinesis Firehose delivery stream to an AWS Elasticsearch domain that's inside a VPC.
- You can present an arbitrary external endpoint inside a kubernetes cluster with a service of type ExternalName:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-logging
  namespace: kube-system
spec:
  type: ExternalName
  externalName: vpc-XXX-sfdljsdjsglsj.co-loc-index.es.amazonaws.com
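A quick way to sanity-check the ExternalName mapping is to resolve the service name from inside the cluster; the throwaway busybox pod and its name here are just for the check:

kubectl -n kube-system run dns-check --rm -it --image=busybox --restart=Never -- nslookup elasticsearch-logging

The answer should come back as a CNAME pointing at the vpc-... amazonaws.com hostname rather than a cluster IP, because ExternalName services are pure DNS.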
- You cannot use the service's port specification to fiddle around with ports here; an ExternalName service is just a DNS CNAME, nothing proxies the traffic, so no port remapping happens.
- AWS's Elasticsearch service listens on 80 and 443, not 9200.
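Those two bullets together mean the fluentd output itself has to speak https on 443 rather than relying on the service to remap anything. A minimal sketch of that output section, assuming the fluent-plugin-elasticsearch plugin and leaving index naming and buffering at their defaults:

<match **>
  # Point at the ExternalName service above, on the port AWS actually serves.
  @type elasticsearch
  host elasticsearch-logging.kube-system.svc.cluster.local
  port 443
  scheme https
  logstash_format true
</match>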
- You can put a whole text file into a container by making it the value of a ConfigMap key, like this:
First make the config map:
kubectl create configmap fluentd-configmap --from-file=fluent-conf
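With --from-file the key name defaults to the file name (fluent-conf here), which is what the items mapping further down refers to. You can confirm by dumping the ConfigMap:

kubectl get configmap fluentd-configmap -o yaml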
Then in the deployment|daemonset you make a volume and a mapping:
Under the container's volumeMounts:
- name: config-vol
  mountPath: /etc/fluent
Under the daemonset / deployment's volumes:
- name: config-vol
  configMap:
    name: fluentd-configmap
    items:
      - key: fluent-conf
        path: fluent.conf
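Putting the two fragments together, a trimmed-down DaemonSet might look like the sketch below; the metadata names and labels are placeholders of my own, and the image is just the one from the kops addon mentioned in the next bullet.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
    spec:
      containers:
        - name: fluentd-es
          # Image tag taken from the kops addon; swap in whatever you actually run.
          image: k8s.gcr.io/fluentd-elasticsearch:1.22
          volumeMounts:
            - name: config-vol
              mountPath: /etc/fluent
      volumes:
        - name: config-vol
          configMap:
            name: fluentd-configmap
            items:
              - key: fluent-conf
                path: fluent.conf

With that in place the container sees the ConfigMap value at /etc/fluent/fluent.conf.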
- the kops docs for doing elasticsearch logging mention the image "k8s.gcr.io/fluentd-elasticsearch:1.22", which does not set the Content-Type header properly, at least for ES 6.x ( https://github.com/kubernetes/kops/blob/master/addons/logging-elasticsearch/v1.7.0.yaml )
- The kubernetes project uses the fluentd agent version "1.2.4"
Next
- Redeploy AWS Elasticsearch publicly with Cognito and TLS.
- Send the Kinesis stream to Elasticsearch via Firehose.