Deep Network GmbH Developers' Blog

How to Set Up an ELK Stack and Filebeat on Kubernetes

Logs are one of the most critical parts of any infrastructure for monitoring and debugging purposes. Every infrastructure produces different types of logs, including third-party, system, and application-specific logs, which come in different formats such as JSON, syslog, and plain text. Handling all of these formats is not trivial, but the variety of formats is not the only challenge: there are also many log producers, especially in cluster environments, so collection and processing cannot be done manually. To overcome these challenges, you have to rely on well-known, dedicated tools and frameworks such as the ELK Stack and Filebeat.

Kubelet API

In this post, we’ll describe how a pod or a user can access the kubelet API available on each node of a Kubernetes cluster to get information about the pods (and more) on that node. We first discuss which ports are available for this purpose, then list the available endpoints (resources) of the kubelet API. Lastly, we discuss how to query the kubelet’s secure port and which authentication and authorization mechanisms are used.
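As a small taste of what the post covers, here is a minimal sketch of querying the kubelet’s secure port from inside a pod. It assumes the default secure port 10250, that the node’s IP is injected into the pod as a NODE_IP environment variable (for example via the Downward API), and that the pod’s service account has been granted the access the kubelet’s delegated authorization checks for; these details are assumptions for illustration, not taken from the post itself.

import os
import requests

# Assumption: NODE_IP is injected into the pod, e.g. via the Downward API.
NODE_IP = os.environ["NODE_IP"]
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

# /pods on the kubelet's secure port (default 10250) lists the pods running on this node.
resp = requests.get(
    f"https://{NODE_IP}:10250/pods",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,  # the kubelet's serving certificate is often not signed by the cluster CA
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    print(item["metadata"]["namespace"], item["metadata"]["name"])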

Understanding Networking Options in Azure AKS-Engine [Part 1]

AKS Engine provides convenient tooling to quickly bootstrap Kubernetes clusters on Azure. By leveraging ARM (Azure Resource Manager), AKS Engine helps you create, destroy, and maintain clusters provisioned with basic IaaS resources in Azure. AKS Engine is also the library that AKS itself uses to perform these operations for its managed service.

Event Hub Consumer Throughput Analysis

In this post, we are going to analyze various strategies for increasing the throughput of a sample Event Hub consumer application. We will try out various scenarios, starting with a baseline to compare results against. During the tests, the Prometheus metric scrape interval is set to 10 seconds, and the Grafana dashboards display the latest 15 minutes for each individual task with a 10-second refresh interval. Each test scenario is based on a customization of the single-partition Event Hub consumer code snippet. In order to isolate the effects of our improvements, we dedicated the Event Hub sender to sending events to only a single partition, so there is no Event Hub partition parallelism during our tests. In addition, we used the Event Processor Host, an agent for .NET consumers that manages partition access and per-partition offsets.
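The post’s tests are built around the .NET Event Processor Host; purely to illustrate what a single-partition consumer looks like, here is a rough sketch using the azure-eventhub Python SDK instead. The connection string, Event Hub name, and partition id are placeholder assumptions and do not come from the post.

from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event hubs namespace connection string>",
    consumer_group="$Default",
    eventhub_name="<event hub name>",
)

def on_event(partition_context, event):
    # A real consumer would checkpoint its position (e.g. with a checkpoint store)
    # instead of just printing the event body.
    print(partition_context.partition_id, event.body_as_str())

with client:
    # Pin the consumer to a single partition, mirroring the single-partition setup
    # described above; "-1" starts from the beginning of the partition.
    # This call blocks until the process is interrupted.
    client.receive(on_event=on_event, partition_id="0", starting_position="-1")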

Securing Access to SQL Server with Managed Identities and aad-pod-binding

A common challenge when building cloud applications is how to manage the credentials your code uses to authenticate to cloud services. In this blog post, I will try to explain how we transformed our application to use managed identities when connecting to an Azure SQL database instance at the pod level by using aad-pod-binding.
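To illustrate the general idea, the sketch below shows how a pod bound to a managed identity (via the aad-pod-identity binding mechanism the post refers to) can obtain an access token for Azure SQL from the instance metadata endpoint and hand it to the ODBC driver. The server and database names are placeholders, and the exact wiring in our application may differ from this sketch.

import struct
import requests
import pyodbc

# Inside a bound pod, aad-pod-identity intercepts this instance metadata (IMDS) call
# and returns a token for the managed identity bound to the pod.
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={"api-version": "2018-02-01", "resource": "https://database.windows.net/"},
    headers={"Metadata": "true"},
)
resp.raise_for_status()
token = resp.json()["access_token"]

# The ODBC driver expects the token as a length-prefixed UTF-16-LE byte string,
# passed through the SQL_COPT_SS_ACCESS_TOKEN (1256) pre-connect attribute.
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;Database=<database>",  # placeholders
    attrs_before={1256: token_struct},
)
print(conn.execute("SELECT 1").fetchval())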