When machines come to an unplanned standstill, they burn a lot of money. Insurance, condition monitoring, MES, BDE (production data acquisition), maintenance or costly training: these are the usual painkillers. Unfortunately, they only work in the short term. The real fault lies neither in the hardware nor in the software: the foundations for expensive failures are laid in the design phase. For this reason alone, a change of perspective is needed.
You don't need long calculations to get to the heart of what unplanned downtime costs: an average machine in an average production runs 40 hours a week and turns out 15,000 units. If each unit generates a gross profit of 8 euros, the company loses 24,000 euros in profit per day of downtime. After three days, that is already 72,000 euros - and this sum does not even include repair costs or the wages of forced-idle employees.
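The arithmetic above can be sketched in a few lines; the figures are the article's illustrative example, not data from a real plant:

```python
# Illustrative downtime-cost calculation using the article's example figures.
units_per_week = 15_000      # weekly output of the example machine
profit_per_unit = 8          # gross profit per unit, in euros
working_days_per_week = 5    # 40 hours a week at 8 hours per day

weekly_profit = units_per_week * profit_per_unit      # 120,000 euros
daily_loss = weekly_profit / working_days_per_week    # lost profit per downtime day

print(f"Lost profit per day: {daily_loss:,.0f} euros")       # 24,000 euros
print(f"Lost profit after 3 days: {3 * daily_loss:,.0f} euros")  # 72,000 euros
```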
Unplanned downtimes cannot be prevented. Especially not on increasingly complex machines. But machine complexity is not the reason why troubleshooting is becoming increasingly lengthy and expensive. The cause lies in a misunderstanding of what constitutes a machine in the first place.
On average, five to ten per cent of a project's volume is spent on programming the PLC. Moreover, programmers come into play much later in the course of the project than mechanical engineers or fitters. As soon as the mechanical and electrical engineering conforms to the specifications, a machine is considered "finished". Programming is often seen as a necessary add-on and the final touch before commissioning.
Accordingly, no one looks too closely at the programmes - they are written for experts and follow the limited ideas, experiences and test scenarios of the programmer himself. He is, after all, the expert.
Given the infinite variety of possible errors, this inevitably leads to limited checking before commissioning. The actual main task - controlling the machine - can only be assessed at start-up. There is not much time for programming, testing and operator training. Therefore, programmers implement (self-)imposed standards to speed up the process. Copy-and-paste and recycling of established code are important quick fixes. Possible undetected errors and problems are copied along; innovation becomes difficult. And the machine remains a black box that provides few answers in the event of a shutdown.
It is in the complexity of the system itself that a program quickly reaches the limits of its representation: each additional signal doubles the state space it should describe and take into account. With just 16 signals there are already 2^16 = 65,536 states that would all have to be anticipated - and so on. The "state explosion problem", prominent in scientific discourse, describes the resulting software limitation. Its consequences are not theoretical but costly:
As soon as the machine enters a non-programmed state, it comes to a standstill - and a lengthy search for the cause begins. There may be no solution at all, because the programmer could neither foresee this state nor programme it manually.
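The doubling described above is easy to make concrete - a minimal sketch of how fast the state space outgrows anything a programmer can enumerate by hand:

```python
# The "state explosion": n binary signals span 2**n possible machine states.
def state_space_size(n_signals: int) -> int:
    """Number of distinct states n on/off signals can encode."""
    return 2 ** n_signals

for n in (8, 16, 32):
    print(f"{n} signals -> {state_space_size(n):,} states")
# 16 signals alone already yield 65,536 states; a real machine has far more
# signals, so manually anticipating every state in PLC code is hopeless.
```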
For this reason alone, it is clear that the view of the machine must change from the separation between hardware and software to a new understanding of the system. Only with the conviction that a machine can potentially assume infinite states can standards be established that address this complexity.
Such standards already exist. However, they do not necessarily fulfil all the requirements for smooth automation:
- Generally valid framework conditions for all basic functions - a standard that is also adhered to by all.
- Verifiable specifications as in mechanics and electrical engineering
- Maximum flexibility for adaptation to different processes
- Mapping of every process
- Automatic translation to the PLC by an algorithm
- Comprehensible model that maps every state and every bit in a controlled manner
- Program that monitors each state and bit for correct or incorrect
- Fast and comprehensible display on the machine
- Immediately visible deviations that successfully reduce downtime
These requirements result in a completely new understanding of machine programming: it must break away from individual, one-off programming and stop treating the machine as an accumulation of components with specific tasks in a specific scenario.
A machine is a sequence of states that can be described unambiguously via on or off - and thus via bits. This basic fact applies to all machines and every sequence.
Since only the state is described, the software can immediately identify the cause of a standstill: A state in the sequence was not implemented correctly. All that remains is to fix the physical source of this error.
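The principle can be illustrated with a small, hypothetical sketch - this is an assumption-laden toy, not SELMO's actual implementation: each step of the process is an expected bit pattern, and any mismatch immediately names the deviating bit.

```python
# Hypothetical sketch of state-sequence monitoring: every process step is an
# expected bit pattern; a mismatch is reported per bit instead of leaving the
# machine as a silent black box.
from typing import Sequence

def check_step(step: int, expected: Sequence[int], actual: Sequence[int]) -> list[str]:
    """Return one diagnostic message for each bit that deviates from the model."""
    faults = []
    for i, (want, got) in enumerate(zip(expected, actual)):
        if want != got:
            faults.append(f"step {step}: bit {i} expected {want}, got {got}")
    return faults

# Example: bit 0 = clamp closed, bit 1 = part present, bit 2 = conveyor running.
expected = [1, 1, 0]
actual = [1, 0, 0]   # the part-present sensor never switched
print(check_step(3, expected, actual))
# -> ['step 3: bit 1 expected 1, got 0']
```

With such a model, a standstill points directly at the physical signal that failed, which is the diagnostic shortcut the article describes.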
With the SELMO standard, the manufacturer and the machine builder agree on a common digital language. Because the functional framework is in place, the process that is to be automated moves to the centre.
The process is modelled simply and with bit precision. Without any PLC expertise at all, the model itself describes the programme logic and serves as the programming specification. Logic and function tests can be carried out long before commissioning. By modelling the machine instead of designing it, all aspects from mechanics and electrical engineering to drivers and functions become virtually self-explanatory. A quickly created list of open and known assemblies creates a common implementation picture. In the modelling phase, it is completely irrelevant which PLC or which programming language is used. PLC specifications have not been able to prevent downtimes to date - and the SELMO programme is created automatically instead of being written.
When the electrical engineering is completed, the PLC commissioning starts: the model is loaded into the PLC and the wiring is checked. If the wiring is OK, the safety is checked, the drives are configured, and the system can be started.
Because all data points have been generated automatically and error-free and every bit is monitored by the programme, the HMI displays any deviation in real-time. The SELMO machine is also functionally stable and easy to operate. Throughout its life cycle, errors in the hardware are quickly detectable. The software does not wear out and the model remains precisely defined. The result is a truly smart machine that runs better and makes things easier for everyone.
Process thinking with SELMO works not only for new machines but also for existing ones. After all, neither specific software nor hardware play a role in the modelling. "Every Bit Under Control" is the working principle of sustainable digital production, which can be realised many times faster than before.
- Most of the condition monitoring is already implemented in the PLC
- Simple and uniform programme structure based on algorithms
- Safe operation without the need for expert knowledge
- Flexible modelling tool, also for future innovations
- Hardware- and manufacturer-independent
- Complete digital visibility of all machine conditions
The SELMO Standard and SELMOstudio are comprehensive tools for fewer and shorter machine downtimes. They don't just treat the symptoms but eliminate the problem in the design phase and thus at its root. This change of perspective reduces costs, minimises risks and ensures future competitiveness. SELMO solves the problem at its source and creates stable automation down to the last bit - smart, simple, secure.