Introducing KPS Connectors

Last September, we made Karbon Platform Services Generally Available. Karbon Platform Services (KPS) is a Kubernetes-based multi-cloud PaaS that enables rapid development and deployment of microservices-based applications across any cloud: private, public, or edge.

KPS enables users to build data processing pipelines by deploying serverless functions written in Python, JavaScript, and Golang. Data pipelines have enabled our customers to prototype and deploy real-time stream processing workloads without worrying about gluing together the underlying infrastructure. Out of the box, KPS supports popular data sources such as:

  • MQTT
  • RTSP
  • GigEVision

Also native to KPS is support for popular data sinks such as:

  • Kafka
  • MQTT
  • AWS (Kinesis, S3, and SQS)
  • GCP (PubSub, CloudStorage, and CloudDatastore)
  • Azure (Blob storage)

One limitation of this feature has been that KPS supported only a fixed set of data sources and data sinks out of the box. Each time a customer needed to wrangle data from a protocol that was not yet supported by KPS, engineering effort was required to implement it as a first-class data source or data sink in the platform. Such an approach does not scale as we onboard new customers and workloads. To maintain product velocity, we needed to decouple support for new data sources and data sinks from the release cadence of our platform.

To address this issue, we are taking another step to open up the KPS platform to third-party developers by enabling them to extend the ingress and egress capabilities of data pipelines. Starting with service domain v2.3.0, we allow developers to build special applications called Connectors. Connectors are Kubernetes applications that run at a project scope and fulfill a gRPC service contract. Through a connector, a data pipeline can consume data from a data source or emit data to a data sink.
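To make the idea of a service contract concrete, here is a minimal sketch in Go. This is not the actual KPS gRPC contract (which is defined in the platform's proto files); the `Connector` interface, `memConnector`, and all method names below are hypothetical, showing only the shape of a component that moves records in and out of named streams.

```go
package main

import "fmt"

// Connector is an illustrative rendering of a connector contract:
// a connector reads records from and writes records to named streams.
// This is NOT the real KPS gRPC interface; names are hypothetical.
type Connector interface {
	// Write pushes a record onto the named stream (egress direction).
	Write(stream string, rec []byte) error
	// Read pops the next record from the named stream (ingress direction).
	Read(stream string) ([]byte, error)
}

// memConnector is a toy in-memory implementation used to exercise the contract.
type memConnector struct {
	queues map[string][][]byte
}

func newMemConnector() *memConnector {
	return &memConnector{queues: map[string][][]byte{}}
}

func (m *memConnector) Write(stream string, rec []byte) error {
	m.queues[stream] = append(m.queues[stream], rec)
	return nil
}

func (m *memConnector) Read(stream string) ([]byte, error) {
	q := m.queues[stream]
	if len(q) == 0 {
		return nil, fmt.Errorf("stream %q is empty", stream)
	}
	m.queues[stream] = q[1:]
	return q[0], nil
}

func main() {
	var c Connector = newMemConnector()
	c.Write("sensor-data", []byte("temp=21.5"))
	rec, _ := c.Read("sensor-data")
	fmt.Println(string(rec)) // prints "temp=21.5"
}
```

In the real platform the same two operations are carried over gRPC, which lets a connector be implemented in any language that speaks protobuf.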

A Connector supports multiple streams to its corresponding data source or data sink. A stream can map onto any logically partitioned set of records: for example, a stream in a Kafka connector maps onto a Kafka topic, while a stream in a MySQL connector maps onto a MySQL table.

Streams are directional and can be one of two types:

  • Ingress: Enable moving data from the data source into the KPS service domain
  • Egress: Enable moving data from the KPS service domain to the data sink

Connectors can be of three types based on the direction of streams they support:

  • Ingress: Connectors that support only Ingress streams
  • Egress: Connectors that support only Egress streams
  • Bidirectional: Connectors that support both Ingress and Egress streams
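One way to picture this classification: a connector's type follows directly from the directions of the streams it declares. The sketch below derives the type from a stream list; the type names and the `ConnectorType` helper are illustrative assumptions, not part of the KPS API.

```go
package main

import "fmt"

// Direction labels a stream as ingress (source -> KPS) or egress (KPS -> sink).
type Direction int

const (
	Ingress Direction = iota
	Egress
)

// Stream pairs a logical partition name with a direction.
type Stream struct {
	Name      string
	Direction Direction
}

// ConnectorType classifies a connector by the stream directions it supports:
// only ingress streams -> "ingress", only egress -> "egress", both -> "bidirectional".
func ConnectorType(streams []Stream) string {
	var hasIngress, hasEgress bool
	for _, s := range streams {
		if s.Direction == Ingress {
			hasIngress = true
		} else {
			hasEgress = true
		}
	}
	switch {
	case hasIngress && hasEgress:
		return "bidirectional"
	case hasIngress:
		return "ingress"
	case hasEgress:
		return "egress"
	default:
		return "none"
	}
}

func main() {
	streams := []Stream{
		{Name: "kafka-topic-a", Direction: Ingress},
		{Name: "s3-bucket-b", Direction: Egress},
	}
	fmt.Println(ConnectorType(streams)) // prints "bidirectional"
}
```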

You can write a connector in any programming language. However, we recommend Go: goroutines map naturally onto the streams a connector handles. We have published a Golang connector template so developers can create a new connector without writing boilerplate, letting them focus on business logic rather than the underlying infrastructure glue.

I would like to take this opportunity to invite the community to help us build more open-source connectors for KPS. To create a new connector, you can read our blog post on building and deploying a connector using the Go SDK. If you have any questions, reach out to us via email or file a GitHub issue on the Golang connector template repository. We will also publish more blog posts and concrete implementations of connectors our team has built using the template. Let us know when you create a connector, and we will add it to our library of available connectors.