

How to Simplify SQL integration for Industrial Data

Streaming SQL use cases for on-premises and cloud deployments.

The importance of SQL and SQL databases is indisputable. When designing data flows for industrial use cases or creating data pipelines for cloud and enterprise integrations, you almost always need to retrieve data from, or insert data into, one or more SQL databases.

The universal structure and low learning curve contribute to the widespread use of this standardized language. SQL is one of the cornerstones of the digitized world.

The Crosser platform takes SQL simplicity to the next level by offering a set of pre-built analytics modules and connectors for integration with SQL databases, enabling users to configure queries for their own use cases.

Crosser SQL Connectors Illustration

Working with SQL databases in the Crosser Platform

Within the Crosser Module Library you can find pre-built connectors to all major databases. These simplified modules require no previous SQL knowledge to be used for integration and queries, and they can easily be combined with advanced analytics modules, machine learning algorithms and other system connectors to create intelligent data flows and pipelines.

Crosser supports databases both on-premises and in the cloud. For more advanced use cases, Crosser also has modules that let you execute any SQL statement against your database.

The key point is that all these modules, independent of which database they operate against, provide the same interface: the same configuration experience, the same expected input data, and the same behavior when you read and write.

Select and Insert SQL Modules

The Select and Insert modules are the most common SQL modules, easy to use without requiring SQL knowledge:

Select modules - read data from databases. Select the table you want to use, then specify the columns you want to read, directly in the module settings.

If you want to limit the data you receive, you can also add filters to narrow down what is retrieved. When adding filters, you can use data from your flow in the filter conditions, creating dynamic queries that adapt to the data in your flow.
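Conceptually, a Select module with a filter behaves like a parameterized query. A minimal sketch in Python, using SQLite as a stand-in database and hypothetical table and column names (the actual Crosser modules hide this query behind configuration):

```python
import sqlite3

# Hypothetical measurements table, stand-in for any supported database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [("temp1", 21.5), ("temp1", 22.0), ("temp2", 19.0)])

# The filter condition uses a value taken from the flow message,
# making the query dynamic
flow_message = {"sensor": "temp1"}
rows = conn.execute(
    "SELECT sensor, value FROM measurements WHERE sensor = ? ORDER BY value",
    (flow_message["sensor"],),
).fetchall()
```

Because the filter value is bound as a parameter rather than pasted into the SQL string, the same query safely adapts to whatever data arrives in the flow.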

Insert modules - write data rows to the database. You can insert either a single row or multiple rows. The module takes input data from the flow; you choose the data you want to write, and you can even add filters.
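The single-row and multi-row cases map onto the two standard insert patterns. A sketch in Python with SQLite as a stand-in and a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor TEXT, value REAL)")

# Insert a single row
conn.execute("INSERT INTO measurements VALUES (?, ?)", ("temp1", 21.5))

# Insert multiple rows in one call
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [("temp2", 19.0), ("temp3", 23.1)])

count = conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
```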

Executer modules

Executer modules - modules where you enter any SQL statement. You can combine data from multiple tables, and take actions like deleting data, adding columns and creating new tables. You can also trigger stored procedures that you might have in your database. Basically, anything you can do with SQL you can do using these Executer modules.

You can also use data from your flows as parameters in those statements to make them dynamic and adapted to whatever data you have in your flow.
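The dynamic-parameter idea can be sketched as a free-form statement with a bound parameter taken from the flow message. Python with SQLite as a stand-in; the table and message fields are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [("temp1", 21.5), ("temp2", 19.0)])

# An arbitrary SQL statement (here a DELETE) parameterized
# with a value from the flow message
flow_message = {"sensor": "temp2"}
conn.execute("DELETE FROM measurements WHERE sensor = ?",
             (flow_message["sensor"],))

remaining = conn.execute("SELECT sensor FROM measurements").fetchall()
```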

Most common SQL Use Cases

Crosser SQL Use Case

Stream Data to the database

Single measurements - Here we receive each measurement as an individual message, for example through an MQTT input. The data could come from multiple sensors but each message is a single measurement.

Then an Insert Module is used to store each measurement as a new row in the database. Typically some kind of data transformation is needed to adapt to the format that the SQL module expects.
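The transformation step above can be sketched as parsing the incoming message and reshaping it into the row layout the Insert module expects. A minimal Python example with SQLite as a stand-in; the payload fields and table layout are hypothetical:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (sensor TEXT, ts TEXT, value REAL)")

# A hypothetical MQTT payload carrying one measurement
payload = '{"sensor": "temp1", "ts": "2024-01-01T10:00:00", "value": 21.5}'

# Transform the message into the column order the insert expects
msg = json.loads(payload)
row = (msg["sensor"], msg["ts"], msg["value"])
conn.execute("INSERT INTO measurements VALUES (?, ?, ?)", row)
```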

Multiple measurements - In this case multiple measurements are provided in each message, for example when fetching multiple registers from a PLC module. Each message will then have an array of measurements. Then, you have two choices depending on how you have laid out your database tables:

  1. Store One Measurement per row - If you use the same type of table as above, where you store one measurement per row, then you can take the array as it is, convert it into the format that the database expects and then send it to the database module. It will then create one row per element in the array. 
  2. Store Multiple Measurements per row - If you have set up your database table to store multiple measurements in each row, then you can take the array of measurements you received from the PLC and convert it into a single object where each measurement is mapped against the corresponding column in the database. Then you will feed that information into an Insert Module and create a single row with all the measurements.
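The two table layouts can be sketched side by side. Python with SQLite as a stand-in; the register names and table shapes are hypothetical:

```python
import sqlite3

# Array of measurements from a hypothetical PLC read
plc_message = [("reg1", 10.0), ("reg2", 20.0), ("reg3", 30.0)]

conn = sqlite3.connect(":memory:")

# Layout 1: one measurement per row - insert the array as-is,
# producing one row per array element
conn.execute("CREATE TABLE narrow (register TEXT, value REAL)")
conn.executemany("INSERT INTO narrow VALUES (?, ?)", plc_message)

# Layout 2: multiple measurements per row - map each measurement
# to its own column and insert a single row
conn.execute("CREATE TABLE wide (reg1 REAL, reg2 REAL, reg3 REAL)")
wide_row = {name: value for name, value in plc_message}
conn.execute("INSERT INTO wide VALUES (:reg1, :reg2, :reg3)", wide_row)
```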

In the examples above, different types of databases can be used; since they all work in the same way, they are interchangeable.

Crosser Read from Database

Read Data from the database

Example 1 - You can use the database as your primary input source. You can add filter conditions, for example run the flow every hour and request only data that has been added in the last hour. You can then process the data however you wish: send notifications, apply triggers, analyze, etc.
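The "only data from the last hour" pattern reduces to a filter on a timestamp column, keeping track of when the flow last ran. A sketch in Python with SQLite as a stand-in and hypothetical timestamps:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (ts TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)", [
    ("2024-01-01T08:30:00", 20.0),
    ("2024-01-01T09:15:00", 21.0),
    ("2024-01-01T09:45:00", 22.0),
])

# Each hourly run requests only rows newer than the previous run
last_run = "2024-01-01T09:00:00"
new_rows = conn.execute(
    "SELECT ts, value FROM measurements WHERE ts > ? ORDER BY ts",
    (last_run,),
).fetchall()
```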

Example 2 - You can use the database to extend the data you get from another incoming data source. As an example, you could get data through an MQTT input to get sensor measurements, then you use a database Select module to make a query against the database to fetch additional metadata associated with the specific sensor. Here you need to use filters based on the data coming from the sensors in order to select the right data. You could for example look up calibration values for a specific sensor by using the name in the database filter. These calibration values can then be applied to the raw sensor data to create more useful data.
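The calibration lookup above can be sketched as a filtered select keyed on the sensor name from the incoming message, followed by applying the fetched values to the raw reading. Python with SQLite as a stand-in; the table, column names, and calibration formula are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE calibration (sensor TEXT PRIMARY KEY, offset REAL, gain REAL)")
conn.execute("INSERT INTO calibration VALUES ('temp1', -0.5, 1.02)")

# Raw measurement arriving from e.g. an MQTT input
msg = {"sensor": "temp1", "raw": 21.0}

# Filtered select: look up metadata for this specific sensor
offset, gain = conn.execute(
    "SELECT offset, gain FROM calibration WHERE sensor = ?",
    (msg["sensor"],),
).fetchone()

# Apply calibration to turn the raw value into useful data
calibrated = (msg["raw"] + offset) * gain
```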


Execute SQL statements

The Executer Modules can be used to run any SQL statement. Here are some common examples:

  1. Automate repetitive tasks in your database, like cleaning it up on a regular basis (removing tables, values, etc.). You can then add a time trigger that performs the task at the desired interval.
  2. Execute both queries and commands. You can use your HTTP server to set up an HTTP endpoint that receives external requests. Based on the request, you make a query against the database and use the result to build a response that is sent back to the HTTP client. As a result, you have wrapped a database behind a REST API.
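The first example, a time-triggered cleanup, reduces to a parameterized DELETE that a scheduler runs at a fixed interval. A sketch in Python with SQLite as a stand-in and a hypothetical retention rule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (ts TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)", [
    ("2023-12-01T00:00:00", 1.0),
    ("2024-01-01T00:00:00", 2.0),
])

# The statement a time trigger would run, e.g. once per day:
# drop everything older than the retention limit
retention_limit = "2023-12-31T00:00:00"
conn.execute("DELETE FROM measurements WHERE ts < ?", (retention_limit,))

kept = conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0]
```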

More advanced use cases to process data can be built using other modules available in the Crosser Library, but the starting point, the database connectivity, is very easy to set up.

You can learn more about the Crosser Module Library here → 

Watch the webinar video: How to Simplify SQL integration for Industrial Data or Schedule a Demo with a Crosser Expert to learn more.


About the author

Goran Appelquist (Ph.D) | CTO

Göran has 20 years of experience leading technology teams. He's the lead architect of our end-to-end solution and is extremely focused on securing the lowest possible Total Cost of Ownership for our customers.

"Hidden lifecycle (employee) costs can account for 5-10 times the purchase price of software. Our goal is to offer a solution that automates and removes most of the tasks that are costly over the lifecycle.

My career started in the academic world where I got a PhD in physics by researching large scale data acquisition systems for physics experiments, such as the LHC at CERN. After leaving academia I have been working in several tech startups in different management positions over the last 20 years.

In most of these positions I have stood with one foot in the R&D team and the other in the product/business teams. My passion is learning new technologies, using them to develop innovative products, and explaining the solutions to end users, technical or non-technical."