The Fastest Way to Create Live Data Products

Do you want to enhance your products with live data? Use Quick - a battle-tested open-source data architecture. The Quick Engineering Team offers flexible support, backed by more than 20 years of experience.

Meet the Quick Engineering Team

  • Kafka Summit London 2022

    A Kafka-based Platform to Process Medical Prescriptions of Germany’s Health Insurance System

    Together with spectrumK, a service company for Germany's health insurers, we have built a platform on top of Apache Kafka and Kafka Streams that can process and approve prescriptions at large scale.

    More about the Session

Quick - the Foundation for your Stream Data Applications

Quick receives and orchestrates your real-time data streams. It runs and maintains your applications on your data streams. Quick exposes production-ready APIs and connects your data streams to your products and devices.

You can use a tested and future-proof architecture

Quick is Open Source and you can start right now

Quick is based on Apache Kafka and runs on Kubernetes

Design your API backend, query your stream data with GraphQL, and use the results.
type Query {
  findProduct(productId: Long): String
    @topic(name: "product-topic", keyArgument: "productId")
}

query {
  findProduct(productId: 123)
}

{
  "data": {
    "findProduct": "..."
  }
}
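As a sketch, the findProduct query above can be issued over plain HTTP from Python. The gateway URL below is an assumption and depends on where your Quick GraphQL gateway is exposed; only standard-library modules are used.

```python
import json
from urllib import request

# Hypothetical gateway address (assumption) -- replace with the URL
# where your Quick GraphQL gateway is actually exposed.
GATEWAY_URL = "http://localhost:8080/graphql"


def build_payload(product_id: int) -> bytes:
    """Build the JSON request body for the findProduct query shown above."""
    query = "query { findProduct(productId: %d) }" % product_id
    return json.dumps({"query": query}).encode("utf-8")


def find_product(product_id: int) -> dict:
    """POST the query to the gateway and return the decoded JSON response."""
    req = request.Request(
        GATEWAY_URL,
        data=build_payload(product_id),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the request body; calling find_product(123) would
    # return a response shaped like the {"data": ...} example above.
    print(build_payload(123).decode("utf-8"))
```

Any GraphQL client works the same way: the gateway accepts a POST with a JSON body containing the query string.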

An experienced Engineering Team - flexible at your side

Alexander Albrecht, Managing Partner

Christoph Böhm, Managing Partner

When you use Quick as the foundation for your Live Data Products, we are happy to support your development: we discuss best practices, provide fresh momentum, and draw on experience from many projects.

Our team of 15 best-in-class engineers creates Live Data Products for diverse industries, such as life science, healthcare, insurance, mechanical engineering, e-commerce and logistics.

Our tech stack includes Apache Kafka, Kafka Streams, Kafka Connect, Kubernetes, Kubeflow, Elasticsearch, (No)SQL, GraphQL, Java, Python and cloud platforms (AWS, GCP, MS Azure).

Schedule first meeting

Your Path to Your Live Data Products


Download Quick from GitHub

Quick incorporates many years of experience and countless engineering hours.
You can use a battle-tested foundation instead of starting from scratch.


Share your challenge

Book a non-binding appointment.
We will demonstrate Quick, answer your questions and outline a path to your solution.


Flexibly at your side

We are there if you need help. We can partner to develop the solution or simply give you food for thought. You can flexibly benefit from our 20+ years of experience.

Schedule first meeting
Quick on GitHub

Use Cases for Quick

Here are a few of the use cases Quick can power.


Digital Twin

Predictive Maintenance

Fraud Detection

Dynamic Pricing

Live Online Analytics

Track 'n' Trace


Let us consider your ideas

We are happy to discuss your challenges and outline a path to your solution with Quick.

We will have a 30-minute video call.
You will talk to Alexander and Christoph.
We will discuss your project.

Schedule first meeting

The d9p Team shares know-how

  • Optimizing Kafka Streams Apps on Kubernetes by Splitting Topologies

    Understanding Kafka Streams processor topologies can be essential to reduce costs and improve complex applications’ manageability ...

    Read article on Medium

  • Exploring Data Pipelines in Apache Kafka with Streams Explorer

    When working with large-scale streaming data, it is crucial to monitor your pipelines and to explore their individual parts.

    Read article on Medium

  • Scaling Requests to Queryable Kafka Topics with nginx

    In this blog post, we implement custom routing logic in nginx to efficiently scale requests to queryable Kafka topics.

    Read article on Medium

  • Solving my weird Kafka Rebalancing Problems

    Imagine you are working on your Kafka Streams application. You deploy it to Kubernetes, wait a few hours, and suddenly ... What’s happening?

    Read article on Medium

  • Continuous NLP Pipelines with Python, Java, and Apache Kafka

    Advancements in machine learning, data analytics, and IoT, and the business strategic shift towards real-time data-driven decision making ...

    Read article on Medium

Blog on Medium