
Event-Driven Data for your Application

The modern enterprise is event-driven and real-time enabled. The foundation for this is the ability to connect to any data source and to capture and act on changes: new data, updated data, or deleted data. We have you covered for all of it.

  • IoT / Machine Data. Process streams of IoT data and take actions based on smart conditions.
  • Report-by-Exception. Only trigger actions on streaming data when a value changes.
  • Webhooks. Connect to SaaS services that push events such as payments, orders or updated records.
  • Pub/Sub. Protocols like WebSockets and MQTT let you subscribe to the topics that matter to you.
  • Change Data Capture (CDC). CDC support in databases lets you keep your data in sync.
  • File Watcher. Detect changes to files and use them to trigger events.
  • Anomaly Detection. Apply fixed rules, custom algorithms or ML/AI models to streaming data to detect anomalies in IoT data, video/audio, user behaviour or transactions.
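The report-by-exception pattern above can be sketched in a few lines: pass a value downstream only when it differs from the last one seen. This is an illustrative sketch, not Crosser's implementation; the `report_by_exception` function and the sample sensor readings are hypothetical.

```python
def report_by_exception(stream):
    """Yield only the readings whose value differs from the previous one."""
    last = object()  # sentinel: nothing seen yet, so the first value always passes
    for value in stream:
        if value != last:
            yield value
            last = value

# Example: a sensor that mostly repeats the same reading.
readings = [20.0, 20.0, 20.0, 20.5, 20.5, 21.0, 21.0, 21.0]
print(list(report_by_exception(readings)))  # → [20.0, 20.5, 21.0]
```

Filtering at the source like this reduces downstream traffic and storage, since unchanged values never leave the edge.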

Users expect real-time. Crosser makes it happen.

As consumers we expect our text messages to be delivered instantly. We want the confirmation of an online purchase to arrive immediately. We take for granted that our social media posts are uploaded straight away. If that doesn't happen, we start to wonder whether something has gone wrong.

This real-time expectation has now reached the business world as well, and it creates new challenges for data integration. Scheduled batch integrations are too slow, and simply moving data from one place to another is not enough.

Crosser is a stream analytics and event-processing solution that uses in-memory processing to deliver data in milliseconds: everything you need to upgrade your application and give your users a modern experience.
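As a rough mental model of in-memory stream processing (not Crosser's actual API), events flow through a chain of small processing stages one at a time, without ever being written to disk between steps. The stage names and sample data below are hypothetical.

```python
from typing import Iterable, Iterator

def parse(events: Iterable[str]) -> Iterator[float]:
    """Decode raw messages into numeric readings, skipping malformed input."""
    for raw in events:
        try:
            yield float(raw)
        except ValueError:
            continue

def threshold_alerts(readings: Iterable[float], limit: float) -> Iterator[str]:
    """Emit an alert for every reading above the limit."""
    for value in readings:
        if value > limit:
            yield f"ALERT: {value} exceeds {limit}"

# Chain the stages: each event streams through in memory as it arrives.
incoming = ["71.2", "oops", "99.9", "42.0"]
for alert in threshold_alerts(parse(incoming), limit=80.0):
    print(alert)  # prints: ALERT: 99.9 exceeds 80.0
```

Because each stage handles one event at a time and hands it straight to the next, latency is bounded by processing time rather than by a batch schedule.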

Crosser vs Kafka

Simple vs Complex

It is common for developers and data teams to default to Kafka for event processing, and sometimes that makes a lot of sense. Depending on the specific requirements, Crosser can either complement Kafka or replace it. For 99% of businesses, however, Kafka is overkill: although it is open source, it comes with a lot of complexity and cost.

Small vs Huge

First, designing and implementing use cases is highly complex from a data engineering perspective and requires a significant amount of time and cost. Second, the technical set-up is complicated and requires significant DevOps resources. Lastly, the minimum requirements for running Kafka include 130 CPU cores and 158 GB of RAM, which is a huge infrastructure cost. Crosser requires a small fraction of that.

Read more about Crosser vs Kafka →
