The Industrial AI Vision Blueprint, also referred to as the Automation Blueprint system, provides a fully integrated, hardware-independent and adaptable modular software stack that can be easily maintained and reconfigured for specific pick-and-place downstream tasks. These downstream tasks include, but are not limited to, visual quality inspection, conveyor picking and impurity detection. As the name suggests, we offer an extensible blueprint that enables the integration of a wide range of sensor modalities for analysing material streams on a conveyor belt using AI, allowing customers to retrieve any object of interest from the conveyor belt using their integrated robotics solution.
The major benefits of our system are listed as follows:
- simplified integration into legacy systems as we provide a standardized and complete software package for a closed-loop application
- high customizability as visual quality inspection criteria depend on the trained AI model and post-processing
- high maintainability as we build on top of the Siemens Industrial Edge ecosystem and the Siemens Industrial AI Suite
- high scalability as one AI pipeline can be rolled-out to multiple production lines simultaneously
Given the rising demand in the recycling industry for retrieving valuable goods from a waste stream in order to reprocess them, an efficient, robust and flexible system that can sort out any valuable object of interest yields high business value. Moreover, from F&B all the way to the steel industry, we identified similar requirements in the process of visual quality inspection, leading to the design of our universally applicable Automation Blueprint system, which can be implemented across many different verticals without the need to start the entire system design & engineering process from scratch.
In this section, we highlight the architecture, the core components and the workflow of our system, as well as how it can be integrated with an existing robotics solution. Starting with the architecture, Figure 1 illustrates the system architecture of our Automation Blueprint system, showcasing the core components, a sensor setup using one color camera and the integrated robotics solution:
The overall system can be split into three major components (viewed from left to right):
- A cloud environment where the AI engineering is performed
- An Industrial Edge Device (denoted as IED in the figure above), used to run the AI model on the retrieved sensor information and to send the information about the objects of interest to the PLC
- A Programmable Logic Controller (PLC) and a Human Machine Interface (HMI) used to control the interaction with the robot and the process parameters
While we do not define a standard for how the cloud environment must be set up, we build on top of the Siemens Industrial Edge and the Siemens Industrial AI Suite portfolio to define a standard workflow for how AI can be run efficiently on the shopfloor and how a reliable concept for IT/OT integration can be implemented. For the OT domain, we rely on the Siemens Simatic toolchain, such as the Siemens Kinematics Integrator, used to handle different robot morphologies, and the Siemens Simatic Product Register, designed for tracking objects on a conveyor belt.
Coming to the configuration of the IED, a minimum of three software components is required to run our system:
- First, the Vision Connector App is responsible for the connectivity and parameterization of different camera models. When properly configured, the Vision Connector App (VCA) forwards the captured images to the AI runtime.
- Second, the AI runtime, also denoted as AI Inference Server, is required to run a pre-built AI package directly on the IED, allowing the images forwarded by the VCA to be analysed. The AI Inference Server can then be connected to the Databus App, which acts as an MQTT broker and allows you to forward the output of the pipeline running on the AI Inference Server to any OT Connector of your choice.
- Finally, to communicate with the PLC, there are three different OT Connectors available on Industrial Edge that allow you to establish a connection between the IED and the PLC using your preferred automation protocol. Those OT Connectors are the OPC UA Connector, the ProfiNet IO Connector and the Simatic S7+ Connector. For further information on the respective connectors and on how to configure them, please take a look at the linked pages.
Figure 2 illustrates all software components utilized in the IED and their interconnection.
Last but not least, the software stack running on the PLC has to fulfill the following requirements to be able to use our system for a closed-loop application:
- we expect a periodic 24V trigger signal from the PLC transmitted over wire to the camera connected to the IED
- the PLC needs to be able to match received object positions from our system to preceding trigger signals issued
- the PLC needs to be able to track given objects on a conveyor belt
- the PLC needs to implement a handling solution to control the robotics manipulator

While our Automation Blueprint system was generally designed to work with different vendors by providing a standardized interface between the IED and the OT domain, we provide an example configuration using the Siemens Totally Integrated Automation Portal (TIA) that shows how to provide the necessary functional requirements for our system on the OT side. In more detail, three software components are required on the PLC to fulfil the four requirements stated above:
- First, we implement the Edge Connector, a library that interprets the data sent from the IED and parses it into a format readable by the Siemens Simatic Product Register
- Second, the Siemens Simatic Product Register is responsible for periodically issuing the 24V trigger signal to the camera while matching incoming object positions from the IED and tracking them on the continuously moving conveyor belt
- Finally, once the tracked objects enter the configured working area of the robot, the Siemens Kinematics Integrator takes over and initiates the pick and place operation for the respective objects.
Figure 3 shows all software components utilized and running on the PLC and their interconnection.
Given the core components described above, Figure 4 depicts the general workflow of our system at runtime:
For the purpose of this application example, we provide all the resources required to properly configure all necessary software components on the IED and flash a template TIA project into the PLC.
The sample configuration files for the latest versions of the VCA and the ProfiNet IO Connector (the OT Connector that we use in this application example) can be found here and here.
In order to instantiate and run an AI pipeline on the AI Inference Server, an AI pipeline package built using the AI SDK is required. To get started with the application example, we provide a link to a pre-built AI pipeline package here, utilizing the latest version of our object detection pipeline used for the demo use case of our application example described here. Note that the AI pipeline utilizes a tool called CoordinateTransformer in its post-processing step, which is required to transform the 2D pixel coordinates computed by the AI model into 3D world coordinates so that the handling solution can pick the objects later on. For more information on the CoordinateTransformer and on how to use the pre-built object detection pipeline, have a look at section Installation & Commissioning.
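Conceptually, the transformation performed in this post-processing step can be pictured as back-projecting each detected pixel onto the conveyor plane, as in the following sketch. The function below is an illustrative stand-in and not the actual CoordinateTransformer implementation; it assumes calibrated intrinsics and extrinsics (see the camera referencing step later in this document) and a flat conveyor surface at a known height.

```python
import numpy as np

def pixel_to_world(u, v, K, R, t, z_belt=0.0):
    """Back-project a pixel (u, v) onto the conveyor plane z = z_belt.

    K      : 3x3 camera intrinsics (from the checkerboard calibration)
    R, t   : extrinsics mapping world -> camera coordinates (x_cam = R @ x_world + t)
    z_belt : height of the conveyor surface in the world coordinate system
    """
    # Viewing ray direction for the given pixel, expressed in camera coordinates
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera center and ray direction expressed in world coordinates
    cam_center_world = -R.T @ t
    ray_world = R.T @ ray_cam
    # Intersect the viewing ray with the horizontal plane z = z_belt
    s = (z_belt - cam_center_world[2]) / ray_world[2]
    return cam_center_world + s * ray_world

# Example: map a detected bounding-box center to world coordinates (values illustrative)
# world_xyz = pixel_to_world(1224, 1024, K, R, t, z_belt=0.0)
```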
Finally, we provide a template TIA project along with the corresponding HMI application configuration which need to be flashed to a Simatic S7-1500 PLC and loaded into the connected HMI respectively.
Before jumping into the engineering process of how to install and commission our system, we provide more insights on the OT components of our system.
Kinematics applications and robot solutions are essential for modern industry. They enable the precise control of object movements and the efficient operation of automated systems. These technologies are indispensable in manufacturing, assembly and packaging, as they significantly increase productivity and accuracy. With advanced software solutions such as the Siemens SIMATIC Kinematics Integrator (LSKI), the control of complex motion sequences is optimized and reliability is increased.
The LSKI is a specialized software solution designed to control kinematics (such as robots) on a SIMATIC S7-1500T PLC. This solution offers a variety of functions and tools that enable efficient programming and control of complex motion sequences.
LSKI_Core: The LSKI_Core function block forms the heart of the Kinematics Integrator and processes predefined command lists to control the kinematics or individual axes. It supports complex control structures such as conditional jumps, loops and access to variables and inputs, enabling flexible and customizable programming. All relevant program data, including program flow, variables, inputs/outputs and point tables, are stored in a single data block. This data block is accessible to other libraries in the TIA project, which facilitates integration and collaboration.
HMI Integration: Kinematics Integrator provides comprehensive HMI integration that allows robot programs to be created, edited and executed directly from the HMI, reducing the need to access a separate engineering system. The application can be enhanced with various HMI screens that provide functions such as start/stop/pause/resume program, single step operation, speed adjustments and user-defined buttons. These screens also allow the teaching of robot positions and the programming of motion commands for robots and single axes.
Technological Plug-ins: In addition to HMI integration, the LSKI can also be expanded with a variety of technological plug-ins. These include the palletizing plug-in, which enables the control and optimization of palletizing processes, and the product registration plug-in, which supports the precise recording and tracking of products during the production process. Another important plug-in is the G-code processing plug-in, which allows the integration and execution of G-code programs and is particularly beneficial in CNC machining. These plug-ins make it possible to adapt the LSKI to specific requirements and increase efficiency and flexibility in various industrial applications.
This brief summary quickly shows the great potential of this library for controlling kinematic applications. Therefore, the LSKI is also of crucial importance for this project, as it offers the necessary flexibility and efficiency to successfully control and optimize complex kinematic applications.
For the sake of this application example, we will explain in detail how to install & commission our system in section Installation & Commissioning based on the following use case scenario:
Let's assume that we intend to sort dice on a moving conveyor belt according to an even or odd eye count. To this end, one sensor modality, namely one mono/color camera connected to the IED, is sufficient for the detection and classification of the dice. For the handling solution, we use an S7-1500 PLC which controls a 4 Degrees of Freedom (DoF) parallel kinematics, also denoted as delta picker, using a two-jaw gripper to sort the dice into the respective target compartments.
In this section we provide a detailed walkthrough of our engineering process to set our system up for a new use case on a new production line. This procedure includes all steps starting from the selection of the required hardware to the deployment of a first working version of our product after performing the Configuration & Deployment of the AI pipeline. Note that if the system is already in place and the aim is to merely adapt the objects of interest to pick, only a subset of the following steps needs to be executed to achieve that goal (assuming no additional sensor modalities are required). Namely, steps 5 - 8 are required to retrain an AI model on the new data and steps 10 - 11 are required for the deployment and configuration of the system.
Initially, before starting any industrial vision application, the requirements of the inspection and the subsequent pick & place application have to be defined. Depending on the use case and its complexity, the requirements for the necessary hardware setup may change, e.g. the IPC used, the sensor and lighting setup, the PLC, the kinematics and actor used, etc. In this application example, we use the following major hardware components for solving the described use case:
- A Basler a2A2448-23gcPRO color camera with 5 MP resolution and GigE interface as vision sensor
- A Basler Lens C23-1620-5M-P
- An EVK IRIL lighting unit
- A modified SIMATIC IPC847E with built-in NVIDIA Quadro RTX 4000 GPU as IED (MLFB: 6AG4114-3RR15-0WY0)
- An S7-1500 Technology CPU as PLC (MLFB: 6ES7516-3UN00-0AB0)
- A SIMATIC HMI KTP900F Mobile (MLFB: 6AV2125-2JB23-0AX0)
- A 3D printed two-jaw gripper as actor
- An Autonox kinematic of type DELTA RL4-600-1kg (MLFB: A_00082-04)
- Three SIMOTICS S-1FK2 translatory servo motors (MLFB: 1FK2104-5AF10-2MA0) for robot movement in X, Y and Z direction
- One SIMOTICS S-1FK2 rotary servo motor (MLFB: 1FK2103-4AG10-2MA0) for rotary movement of the actor
- One SIMOTICS S-1FK2 translatory servo motor (MLFB: 1FK2104-4AF00-0MA0) for the conveyor belt
- Five SINAMICS S210 drives (MLFB: 6SL3210-5HE10-8UF0)
Complementary to the necessary hardware selection, selecting software components that fulfill the requirements of the given use case is equally fundamental. While most software components in our system are standardized and only allow for flexible configuration, a few components can be swapped depending on application requirements. For instance, depending on the performance requirements of the AI pipeline, there is the option to select between the AI Inference Server and the AI Inference Server GPU-accelerated, where the latter allows the AI model to be run on a built-in NVIDIA GPU. Furthermore, users can also choose from several OT connectors, e.g. ProfiNet IO, OPC UA or S7, making our system more versatile in terms of OT integration capabilities. In this application example, we use the following software stack:
- Vision Connector v1.0.1 for connecting the camera to the IED
- AI Inference Server GPU-accelerated v2.1.0 running the AI pipeline package
- Databus v3.0.1 for the communication between the AIIS and the OT connector
- ProfiNet IO Connector v2.1.0-4 for the OT communication
- ProductFlowManager v2.0.0 for tracking the objects on the moving conveyor belt
- SIMATIC Kinematics Integrator v2.1.0 for controlling the handling solution
The placement of the utilized sensors in the plant is crucial to the visual inspection task. General guidelines exist for certain sensor types; for mono/color cameras, for example, the Field of View (FoV) and the image quality are heavily impacted by the configuration and type of lens as well as the mounting offset of the camera w.r.t. the inspection surface. Here, we will discuss two aspects that are more specific to our system:
sensor positioning: In addition to the position of the sensors in Z-direction (we perform inspections using a bird's-eye view), the offset of the sensors in X-direction relative to the robot is equally important. Note that the positive X-direction is defined by the direction of conveyor movement. Due to the distributed architecture of our system, we have two limiting factors when it comes to reliability and performance:
- the offset in X-direction between the camera and the delta picker
- the speed of the conveyor belt
The reason for this limitation is that, in order to guarantee that our system is capable of picking the objects of interest from the conveyor belt, the distance between the mounting point of the camera and the working area of the robot must not be traversed faster than the inspection can be processed by the system. To this end, the maximum conveyor speed supported by our system is inversely proportional to the inspection processing time.
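As a simple illustration of this relation, the maximum supported conveyor speed can be estimated by dividing the camera-to-robot offset by the end-to-end inspection time; the numbers below are assumptions, not measured values of our demonstrator.

```python
# Illustrative relation between camera-to-robot offset, inspection time and the
# maximum supported conveyor speed (all values are example assumptions).
camera_to_robot_offset_m = 0.80   # offset in X-direction between camera and delta picker
inspection_time_s = 0.80          # end-to-end time from camera trigger to object data at the PLC

# The belt must not carry an object past the robot's working area before the
# inspection result is available, so:
max_conveyor_speed_m_per_s = camera_to_robot_offset_m / inspection_time_s
print(f"max. supported conveyor speed: {max_conveyor_speed_m_per_s:.2f} m/s")  # -> 1.00 m/s
```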
multi-sensor setup: When utilizing multiple sensor modalities with our system, all sensors must be mounted at the same height, with each sensor only having an offset in X-direction (imagine mounting the sensors on a virtual line). For our system to work, we perform a calibration procedure to align the individual sensor streams for further processing, where one sensor - usually the mono/color camera - is selected as reference sensor.
Depending on the intended sensor modality and sensor model, connectivity to the IED as well as to Industrial Edge can vary significantly. While any sensor using standard TCP/IP communication and RJ45 ethernet connectors can be plugged into the IED, the Vision Connector App (VCA) running on Industrial Edge currently only supports selected industrial cameras and RTSP cameras. In our application example, we are using a Basler a2A2448-23gcPRO color camera with 5 MP resolution and GigE interface for this purpose.
In order to acquire high-quality images for the visual quality inspection, the VCA enables the configuration and parameterization of the connected cameras. This parameterization includes, but is not limited to, the exposure time, trigger source and region of interest on the acquired image. A sample configuration for our application example is given here.
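In the blueprint, these parameters are set through the VCA configuration. Purely for illustration, the following sketch applies comparable parameters (hardware trigger, fixed exposure, reduced region of interest) directly to a Basler GigE camera via Basler's pypylon API; all parameter values are assumptions.

```python
from pypylon import pylon

# Connect to the first camera found (the Basler a2A2448-23gcPRO in our setup)
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Hardware trigger from the PLC's 24V signal, wired to opto-coupled input Line 1
camera.TriggerSelector.SetValue("FrameStart")
camera.TriggerMode.SetValue("On")
camera.TriggerSource.SetValue("Line1")

# Fixed exposure time in microseconds (illustrative value)
camera.ExposureAuto.SetValue("Off")
camera.ExposureTime.SetValue(1500.0)

# Region of interest restricted to the conveyor belt area (illustrative values)
camera.Width.SetValue(2048)
camera.Height.SetValue(1536)
camera.OffsetX.SetValue(200)
camera.OffsetY.SetValue(256)

camera.Close()
```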
Building on a professional vision setup and data collection workflow, we can start collecting the required data for training and evaluating our target AI model. In order to achieve maximum performance and reliability, a few points must be observed when training an AI model for a specific use case:
sampling representative data: To train a reliable AI model for a specific downstream task, it is key to sample sufficient data to capture the product variance of the material stream that should be inspected at runtime. For our application example, this means that we need a sufficient number of images capturing one or multiple dice per image, where each class - the eye count on the dice in this case - should be evenly represented in the overall dataset.
use case complexity: Depending on the complexity of the visual quality inspection w.r.t. the inspection criteria (e.g. defects on a product) or the product variance (e.g. multiple products should be inspected using the same AI model), the training dataset needs to contain a sufficiently large number of samples for the model to converge. For our application example, a training dataset of approximately 100 samples should be enough for a solid AI model.
sim2real gap: To guarantee that the performance of the trained AI model at runtime conforms to its performance during training and local evaluation, the data collected for training should be gathered using the same, or at least a similar, vision setup as the one used at runtime. Utilizing a different vision setup at runtime (e.g. different lighting, different lens, different perspective) may degrade the performance of the AI model rapidly.
In order to annotate the collected data for training our AI model, we rely on the Computer Vision Annotation Tool (CVAT). CVAT supports all standard computer vision tasks, ranging from classification to object detection and segmentation. It also allows you to export your annotations in all standard formats like COCO, Pascal VOC or YOLO. CVAT further supports auto-labelling, assisting the annotation process by running a pre-trained AI model in the background.
In this application example, we utilize an object detection algorithm as the AI model for solving the given problem statement. For this purpose, we need to create an object detection dataset featuring category information (class ids) and localization information (bounding boxes) for each object on each image in the dataset. The annotations are preferably stored in COCO format, which simplifies the training process in the next step as we expect the annotations to be in COCO format for this application example.
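For reference, a minimal COCO-format annotation file for a single image of our dice dataset could look like the following sketch; file names, image dimensions and bounding-box values are illustrative.

```python
import json

# Minimal COCO-style annotation file for a single training image
# (class names, ids and coordinates are illustrative for the dice use case).
coco = {
    "images": [
        {"id": 1, "file_name": "dice_0001.png", "width": 2448, "height": 2048}
    ],
    "categories": [
        {"id": 1, "name": "one"}, {"id": 2, "name": "two"}, {"id": 3, "name": "three"},
        {"id": 4, "name": "four"}, {"id": 5, "name": "five"}, {"id": 6, "name": "six"}
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, as required by the COCO format
        {"id": 1, "image_id": 1, "category_id": 4, "bbox": [612.0, 488.0, 96.0, 94.0],
         "area": 9024.0, "iscrowd": 0}
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```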
Once a sufficient amount of representative high-quality data has been collected and annotated, an AI model can be trained to execute the desired downstream task. In our application example, this downstream task is the detection of dice and their eye counts on the perceived camera images. As a detailed description of the usage of our AI pipeline repository and the training procedure to derive the final AI model would go beyond the scope of this application example, we provide a pre-built AI pipeline package here instead, ready to be used directly in the production environment.
There are several functional requirements that contribute to the selection of an appropriate kinematics and gripper given the specific use case. Primarily, there are four aspects to consider:
- throughput of the handling solution
- speed of the material flow
- grippability of the objects of interest, as they may not be compatible with every gripper type
- weight of the objects to pick
For the purpose of this application example, we use a 3D printed two-jaw gripper for our showcase. This allows us to demonstrate our system on simple use cases, such as picking dice based on an even or odd eye count, without being required to implement a compressor for controlling a more flexible pneumatic actor.
At runtime, our system relies on two components to seamlessly work together:
- the IED for analysing the sensor data of the material streams on the conveyor belt (IT solution)
- the PLC used for controlling the robot and initiating the pick & place process (OT solution)
To this end, we need to define an interface that allows us to efficiently transmit the analyzed data from the IED back to the PLC. For our application example, we use ProfiNet as the communication protocol, with the IED configured as an iDevice and the PLC as the controller. Based on this configuration, we use the format shown in Figure 3 to transmit the necessary data.
The payload is represented as a byte array implicitly encoding the class and position of each object detected in the image. Given the maximum payload per ProfiNet frame, we limit the number of detectable objects per image to 40, resulting in a payload of 1154 bytes per ProfiNet frame.
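Purely for illustration, the sketch below shows how such a byte array could be assembled in Python using a hypothetical per-object record; the actual byte layout of the 1154-byte frame is defined by the interface shown in Figure 3 and is not reproduced here.

```python
import struct

# Hypothetical per-object record for illustration only - the actual layout of the
# 1154-byte ProfiNet frame (up to 40 objects) is defined by the blueprint interface
# shown in Figure 3. Each record here: class id (uint16) plus x, y, z, orientation (float32).
RECORD_FMT = "<H4f"          # 18 bytes per object in this sketch
MAX_OBJECTS = 40

def encode_detections(detections):
    """detections: list of (class_id, x_mm, y_mm, z_mm, angle_deg) tuples."""
    payload = struct.pack("<H", len(detections))          # object count header (assumed)
    for class_id, x, y, z, angle in detections[:MAX_OBJECTS]:
        payload += struct.pack(RECORD_FMT, class_id, x, y, z, angle)
    return payload

frame = encode_detections([(4, 512.3, 118.7, -2.0, 15.0)])
```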
In order to pick the inspected objects from the moving conveyor given the decentralized architecture of our system, a common world coordinate system (WCS) must be established. For this purpose, all sensors and actuators involved in the analysis and handling of the objects must be referenced with respect to a common world origin (reference point). For the sake of simplicity, our system only requires you to specify the location of the kinematics base in the WCS, while a calibration procedure must be performed for the reference sensor connected to the IED (usually a 2D camera). Both of the necessary configurations are discussed below:
Referencing the robot: Depending on the technology stack used for controlling the plant, the definition of the WCS may vary significantly. In our application example, we use the TIA portal to provide a comprehensive OT solution for controlling the plant. In TIA, robots, e.g. the 4 DoF parallel kinematics we use, are represented as technology objects for which a WCS can easily be configured using the provided UI.
Referencing the camera: To be able to calculate the corresponding 3D world coordinates from the observed camera image at runtime, a calibration procedure needs to be performed to determine the required intrinsic and extrinsic parameters of the vision setup used. This step is crucial for the functionality of the application, as it relates the 2D pixel coordinates computed by the AI model on the camera images to the WCS. It is important to note that this step fixes the position of the camera in the WCS; thus, moving the camera afterwards will result in a malfunction of the system. The calibration procedure requires two steps:
- A collection of images using a checkerboard viewed from different angles utilizing the installed vision setup
- A set of measurements of the 2D to 3D point correspondences (a set of four point pairs where each measured 3D point is associated with its 2D counterpart)
For the sake of simplicity, we ship the required configuration file for our demonstrator already along with the pre-built AI pipeline package.
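For completeness, the sketch below shows how such a calibration could be carried out with OpenCV: the intrinsics are estimated from the checkerboard images and the extrinsics from the four measured 2D/3D point pairs. Checkerboard dimensions and all coordinate values are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

# --- Step 1: intrinsics from the checkerboard images (e.g. a 9x6 board, 25 mm squares) ---
pattern = (9, 6)                                   # inner corners of the checkerboard (assumed)
square_mm = 25.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):       # assumes at least one checkerboard image exists
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)

# --- Step 2: extrinsics from the four measured 2D/3D point correspondences ---
world_pts = np.array([[0, 0, 0], [400, 0, 0], [400, 300, 0], [0, 300, 0]], np.float32)   # measured in mm
pixel_pts = np.array([[312, 1744], [2105, 1731], [2118, 388], [325, 401]], np.float32)   # measured in px
ok, rvec, tvec = cv2.solvePnP(world_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)          # together with tvec: pose of the world origin in camera coordinates
```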
After the general functionality of the application has been explained in the previous chapters, the implemented functionalities will now be examined in more detail; the basic structure can be seen in the following figure. It is important to know that several libraries are combined within the application which are not described in detail here. To get a more detailed insight into the individual library modules, the respective documentation is linked in each case. In addition to general explanations, these documents also describe the necessary steps for implementation and commissioning, which are assumed to be known within this blueprint summary.
The figure shows the general structure of the software components. The system is structured in such a way that the data from the AI application is transmitted via Profinet RT communication and then processed in the LPickLnVisionConnectorEdge function block. This function block acts as the interface between Industrial Edge and the ProductFlowManager, with its main purpose being to convert the recognised object data into a form that can be processed by the ProductFlowManager function block. The object data consists of the cartesian object coordinates, including the orientation of the objects, as well as additional properties such as the type of the recognised object.

As the entire AI process requires an undefined amount of time for processing, attention must be paid to the synchronisation of the two blocks. This is realised in this application by controlling the camera trigger using a cam on the conveyor belt, an option that is already included in the product register library. This synchronisation works in such a way that the ProductFlowManager cyclically triggers the camera system after a certain distance travelled by the conveyor belt and saves the exact position. Once the AI process has been successfully completed, the recognised objects can simply be matched with the saved conveyor belt positions. Note that the trigger interval must be configured according to the FoV of the camera in order to achieve a consistent analysis of the material stream on the conveyor belt; for this purpose, the ProductFlowManager allows you to specify the length of the FoV of the camera in mm in its configuration. As the basic functionalities for this are implemented in the product register library, we will not go into more detail here.

The recognised and matched objects are stored in the product database within the product register library. The object data is stored within this database, whereby the constantly changing conveyor belt movements are also taken into account. The following figure shows the detailed structure of the variables within the data block which are used for the interaction between the "VisionConnector_Edge" and the "ProductFlowManager".
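To make this trigger-based synchronisation more tangible, the following Python sketch mirrors the idea in simplified form: the belt position is stored at every camera trigger and, once the AI result arrives, the detected objects are matched to the stored trigger position. All names and values are illustrative; the actual logic is implemented in the ProductFlowManager and LPickLnVisionConnectorEdge function blocks on the PLC.

```python
from collections import deque

class TriggerSync:
    """Illustrative model of the camera/AI synchronisation described above."""

    def __init__(self, trigger_interval_mm):
        self.trigger_interval_mm = trigger_interval_mm   # should match the FoV length of the camera
        self.last_trigger_pos_mm = 0.0
        self.pending = deque()                           # belt positions of issued triggers

    def on_belt_position(self, belt_pos_mm):
        # Trigger the camera each time the belt has advanced by one trigger interval
        if belt_pos_mm - self.last_trigger_pos_mm >= self.trigger_interval_mm:
            self.last_trigger_pos_mm = belt_pos_mm
            self.pending.append(belt_pos_mm)             # remember the belt position at trigger time
            return True                                  # -> issue the 24V trigger to the camera
        return False

    def on_ai_result(self, objects_in_fov):
        # AI results arrive in trigger order; pop the matching trigger position
        trigger_pos_mm = self.pending.popleft()
        # Convert each object's offset within the FoV into an absolute belt position
        return [{**obj, "belt_pos_mm": trigger_pos_mm + obj["x_mm"]} for obj in objects_in_fov]

# Usage sketch with assumed values
sync = TriggerSync(trigger_interval_mm=350.0)            # ~ FoV length of the camera (assumed)
if sync.on_belt_position(belt_pos_mm=351.2):
    pass                                                 # -> PLC issues the 24V camera trigger here
tracked = sync.on_ai_result([{"class": "four", "x_mm": 120.5, "y_mm": 42.0}])
```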
Apart from processing the object data from the camera connector, the product register also coordinates the actuators that are used to pick objects from the conveyor belt. Within this application, a delta picker is used for this task; however, several delta picker instances can also be coordinated within the library. The data exchanged between the product register and the kinematics controller are once again the cartesian object coordinates, including the orientation of the object, whereby individual objects are transferred here. The product register transfers the object data in relation to an object coordinate system that moves with the conveyor belt. These data points are transferred via the "LPickLn_PickersExchange" interface and are mapped in network 6 in the "LSKI_GlobalProjectData" data block. In addition, some conditions are set within this network, which are described in the next section, to ensure the correct interaction with the kinematics controller. The following illustrations show the conditions and variables that are exchanged here.
After mapping the data points into the kinematics structure, the conditions that are required to implement a step chain in the robot program are now described in more detail. The function of these conditions is that they can be set from the PLC program as well as in the robot step chain in the HMI. This makes it easy to change the interaction between the robot and the product register. The following figure shows the conditions that have been defined for this application. It is important to note that some conditions are already available for the system functions (INITIALIZE, START, STOP, ...). The exact control in the course of this application will be considered in more detail afterwards.
The following illustration shows the robot step chain in terms of a flow chart. After initialising the application via the HMI, all axes are activated and the first conditions are set. As soon as the start button on the HMI is pressed, the conveyor belt starts moving and triggers the AI system at defined intervals. As soon as the kinematics then receive an order from the product register, a condition is set again and the path for the gripping movement is started. During this path, the set positions of the product register are automatically adopted whereby the kinematic system automatically synchronises to the conveyor belt. The information from the AI system is then used to decide to which compartment the robot should move. After the object has been placed, the robot returns to its starting position and waits for the next object to be processed. This process is repeatedly executed until the step chain is stopped using the Stop button.
The figure illustrated below shows the overall implementation of the robot step chain using the LSKI as well as its alignment with the other components of the overall system.
Given the core structure of the robot step chain shown above, the following four tracks are implemented to handle the requirements of our demo pick and place use case:
- The track "pick_path" handles the movement of the robot to grasp a target object on the conveyor belt
- The track "home_path" moves the robot to its defined home position
- The track "pick-object-eve" moves the robot above the compartment specified for placing objects classified as even e.g. an even eye count on the dice for this use case
- The track "pick-object-odd" moves the robot above the compartment specified for placing objects classified as odd e.g. an odd eye count on the dice for this use case
In order to handle the requirements for the pick and place application of our demo use case, the following points in the LSKI's point table are defined and used by the LSKI in the programmed tracks:
- The point "home" defines the coordinates of the home position of the robot
- The point "pickup" is dynamically computed and represents the location of the target object on the conveyor belt including some safety offset in Z-direction
- The point "pickuppick" is dynamically computed and represents the location of the target object on the conveyor belt
- The point "dropoff-even" defines the position of the compartment for placing objects classified as even including some safety offset
- The point "dropoff-odd" defines the position of the compartment for placing objects classified as odd including some safety offset
Finally, the following conditions and variables are specified by the LSKI to ensure correct interaction with the other components of the system:
- The condition "Gripper" defines whether the gripper shall be closed or opened
- The condition "kinematics-busy" defines the processing state of the robot i.e. if the robot is currently processing a target object or waiting for another object to pick
- The condition "lpickln-validobjectpos" acts as signal between the ProductFlowManager and LSKI and determines if a new object is ready to be picked
- The condition "lpickln-objectpicked" acts as feedback signal from LSKI to ProductFlowManager and determines if the new object has been picked
- The condition "vca-pfm-enable" controls the enablement of the LPickLnVisionConnectorEdge and ProductFlowManager function blocks
- The condition "enable-hsi-lighting" controls the lighting unit integrated in the machine vision solution
- The condition "object-even" is defined by the classification result of the AI system and is set if the target object is classified as even
- The variable "z-pick-height" defines the Z-position of the objects to be picked (based on our configuration, the default value is -2.0)
Given the pre-built AI pipeline package here, we can upload the package to the AI Inference Server (AIIS) GPU-accelerated to run it in a production environment. In order to integrate the AIIS with the other software components running on the IED, as shown in Figure 2, the data source, the data sink and the mapping of the input and output channels must be defined. In our application example, we use the VCA, utilizing ZeroMQ (ZMQ), as the data source, where the input of our AI pipeline is mapped to the ZMQ topic of the configured camera; the name of the topic can be found in the VCA UI. For the data sink, we use the Databus application, which acts as an MQTT broker and allows the results of our AI pipeline to be published to any subscriber in the network, e.g. the ProfiNet IO Connector used to communicate with the PLC. To this end, the output of our AI pipeline is mapped to the MQTT topic of the ProfiNet IO Connector specified for writing data points, e.g. ie/d/j/simatic/v1/pnhs1/dp/w.
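The channel mapping configured in the AIIS UI can be pictured as the following standalone sketch, which subscribes to the camera images via ZeroMQ and publishes the pipeline results to the Databus via MQTT. The ZMQ endpoint, the broker host and the `run_ai_pipeline` stub are illustrative assumptions; only the MQTT write topic is taken from this application example.

```python
import json
import zmq
import paho.mqtt.client as mqtt

# Illustrative stand-in for the channel mapping configured in the AIIS UI:
# images arrive from the VCA via ZeroMQ and results are published to the
# Databus (MQTT) topic consumed by the ProfiNet IO Connector.
ZMQ_CAMERA_ENDPOINT = "tcp://vca:5555"                  # assumed endpoint of the configured camera topic
MQTT_WRITE_TOPIC = "ie/d/j/simatic/v1/pnhs1/dp/w"       # data-point write topic of the ProfiNet IO Connector

def run_ai_pipeline(frame):
    # Placeholder for the object detection pipeline running on the AIIS
    return {"objects": []}

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect(ZMQ_CAMERA_ENDPOINT)
sub.setsockopt_string(zmq.SUBSCRIBE, "")                # subscribe to all frames from the camera topic

client = mqtt.Client()                                  # paho-mqtt >= 2.0 additionally expects a callback API version argument
client.connect("ie-databus", 1883)

while True:
    frame = sub.recv()                                  # raw image payload from the VCA
    detections = run_ai_pipeline(frame)
    client.publish(MQTT_WRITE_TOPIC, json.dumps(detections))
```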
The Industrial AI Vision Blueprint project was made possible with the help of many contributors (alphabetical order). For general questions, please reach out to:
- Ralf Gerdhenrichs ([email protected])
- Lukas Gerhold ([email protected])
- Daniel Schall ([email protected])
For technical questions:
- Daniel Dräxler ([email protected])
- Daniel Scheuchenstuhl ([email protected])
Thank you for your interest in contributing. Anybody is free to report bugs, unclear documentation, and other problems regarding this repository in the Issues section. Additionally, everybody is free to propose changes to this repository using Pull Requests.
If you haven't previously signed the Siemens Contributor License Agreement (CLA), the system will automatically prompt you to do so when you submit your Pull Request. This can be conveniently done through the CLA Assistant's online platform. Once the CLA is signed, your Pull Request will automatically be cleared and made ready for merging if all other test stages succeed.
Please read the Legal information.