We have already touched on this subject in our last blog. In this entry, we would like to discuss it in more detail.
If you add up our collective IT experience, you will probably arrive at several hundred years, maybe even a whole millennium. In all that time, in all our discussions, one topic was always sure to pop up: data – and not because data are a classic topic!
No matter where you go, no matter who you talk to, everyone keeps saying that the data are bad – regardless of whether the data were provided via a data transfer or served as a data source for analyses or marketing activities. When you have listened to this complaint for so many years, two questions beg to be asked:
– What exactly constitutes good data anyway?
– How can I ensure that data do not turn “bad”?
What exactly constitutes good data?
Although it is tempting, let us not start a philosophical discussion about “good” and “bad.” Data are not a four-course meal that is supposed to taste good, nor are they a manufactured object that can be validated under ISO parameters for their production quality. Data are data, nothing more and nothing less. And in my opinion, they are neither good nor bad.
Data are always embedded in a context or ecosystem, and both matter for the two criteria that follow. Data are information, and information needs to be interpreted. If the interpretation is conclusive and the data are useful for the intended purpose, then the data are good – and vice versa. And since these two conditions are joined by a logical “AND,” which only holds when both are true, data are more often bad than good.
But let us take a closer look at these individual requirements.
A single data record does not tell you anything; it only makes sense in a chain of additional information. A name and an address, for instance, may be interesting, but only in the context of an offer, an order, or a purchase order – and even more so when payments are posted against individual transactions. This is, of course, only a small example. Data only appear conclusive once this form of correlation is understood.
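The point can be sketched in a few lines of code. This is a purely hypothetical illustration – the record shapes, field names, and the `open_balance` helper are all invented, not taken from any real system – but it shows how a name and address only become interpretable once orders and payments are linked to them:

```python
# Hypothetical illustration: a single record becomes meaningful only
# when linked to related information (orders, payments).

customer = {"id": 1, "name": "Jane Doe", "address": "12 Main St"}

orders = [
    {"order_id": 100, "customer_id": 1, "total": 250.0},
    {"order_id": 101, "customer_id": 1, "total": 120.0},
]

payments = [
    {"order_id": 100, "amount": 250.0},
]

def open_balance(customer_id):
    """Ordered amounts minus posted payments for one customer."""
    ordered = sum(o["total"] for o in orders if o["customer_id"] == customer_id)
    order_ids = {o["order_id"] for o in orders if o["customer_id"] == customer_id}
    paid = sum(p["amount"] for p in payments if p["order_id"] in order_ids)
    return ordered - paid

print(open_balance(1))  # → 120.0
```

Taken alone, the `customer` record says nothing; joined with the transactions, it answers a concrete question: this customer still owes 120.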
You always need a summarizing signal that tells you how to evaluate any given status. It has to be quickly apparent and understandable, because only then does the whole become conclusive and thus correct. This, however, is already the preliminary stage of our next step.
Conclusiveness refers to the processing of individual operations. This, too, is a form of analysis in which information is summarized and forwarded – think, for example, of the netted position list per customer or the total annual revenue displayed directly in the customer form.
The actual interpretation, however, happens in the statistics and reports. Here, large amounts of data are consolidated, grouped, and displayed according to their meaning. The data are only good when the result of these processes is both sufficient and understandable.
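The consolidation and grouping described above can be sketched as follows. Again, this is only an assumed, minimal example – the transaction data and the `annual_revenue` function are invented for illustration, not drawn from any particular reporting tool:

```python
# Hypothetical sketch: consolidating individual transactions into a
# per-customer summary, the way a simple revenue report might.
from collections import defaultdict

transactions = [
    {"customer": "Acme", "year": 2023, "amount": 1000.0},
    {"customer": "Acme", "year": 2023, "amount": 500.0},
    {"customer": "Brix", "year": 2023, "amount": 750.0},
]

def annual_revenue(records, year):
    """Group transactions by customer and sum the amounts for one year."""
    totals = defaultdict(float)
    for rec in records:
        if rec["year"] == year:
            totals[rec["customer"]] += rec["amount"]
    return dict(totals)

print(annual_revenue(transactions, 2023))
# → {'Acme': 1500.0, 'Brix': 750.0}
```

The raw transactions are hard to read; the grouped result is the “summarizing signal” a reader can actually evaluate.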
It is not easy to keep the individual parameters apart, since everything is so closely related. Analyses are helpful and necessary in their own right, but here they are an integral part of further processing.
A system like Odoo is a veritable data octopus: it not only helps to transfer data between departments, to control the subsequent necessary steps, and, ultimately, to make them transparent, but also to process the data for sales and marketing.
In both cases, knowledge of the customer base is essential. Let us find out why.
Good customer, bad customer – how much attention do I give to whom? Who has priority? How much does the customer owe me? These are essential questions in sales. If a customer calls and I already have the relevant data at hand, I save on “postage,” as it were, and do not need to chase them up by phone.
Here, things become more complicated, because marketing requires correspondingly larger amounts of data for very specific purposes such as newsletters, campaigns, or trade fairs. Consequently, both customer profiles and transactions are selected according to various criteria. This process (a) necessitates complex queries and (b) requires additional filtering at different points.
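A selection of this kind can be sketched as a combined filter over customer profiles. The criteria, field names, and the `select_recipients` helper below are assumptions made up for this example – a real campaign selection would run against the actual database – but the shape of the query is the same:

```python
# Hypothetical sketch of a marketing selection: filter customer
# profiles by several combined criteria before a newsletter campaign.

customers = [
    {"name": "Acme", "country": "DE", "revenue": 15000.0, "opt_in": True},
    {"name": "Brix", "country": "DE", "revenue": 2000.0,  "opt_in": True},
    {"name": "Cori", "country": "FR", "revenue": 30000.0, "opt_in": False},
]

def select_recipients(records, country, min_revenue):
    """Combine several criteria into one selection (the 'complex query')."""
    return [
        c["name"]
        for c in records
        if c["country"] == country
        and c["revenue"] >= min_revenue
        and c["opt_in"]  # additional filter, e.g. consent to be contacted
    ]

print(select_recipients(customers, "DE", 10000.0))  # → ['Acme']
```

Each added criterion – country, revenue threshold, opt-in – is one of the “different points” at which the selection is filtered further.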
In this case, the data can only be called good if large amounts of information have been processed correctly and the campaign has been successful.