If I understand it correctly, the starting point of the study of logic is the realisation that propositions/thoughts exhibit a certain formal structure that is independent of their content. This form can be examined as such, for the purpose of discovering the patterns therein as well as for developing a system of analysis that can discern a priori validity from error.
The gains of such a study can be realised in applications that enjoy widespread use in our daily lives. For the purposes of this article, we will consider the example of a text block, its phenomenal presence and formal constitution, with regard to the operations it can support; a case I have already been wrestling with in the context of a certain WordPress application I am developing for private use.
Let us then venture to apply logic’s form-content dichotomy to the following paragraph:
This is a text block. Its content is of no import to our discussion. The actual words here written, which are in English rather than gibberish, are not meant to be interpreted literally, but only to be treated with leniency, in view of their actual purpose of filling in the space perceived as adequate for making a piece of text appear as a full-bodied, meaningful paragraph.
On the face of it, this text block has a singular, monolithic presence: it is just a paragraph. Within it all elements appear as consubstantial, as essentially identical. If this were the information one would use to perform a specific function, it would be impossible to qualify in advance and without reference to semantics any particular subset of information: the paragraph is indivisible as is. By considering its formal aspects, we can overcome such an issue, introduce a structure that fosters discernibility of the elements and lay the foundations for processes that can occur even against — or in spite of — the variability of the actual content.
In propositional logic, the study that deals with the structure and interrelations of propositions, a paragraph would be divided into its sentences, with a lowercase letter (p, q, r...) assigned to each of them as its identifier. The proposition “This is a text block” would become p, “Its content is of no import to our discussion” would be q and so on. These would then be checked against one another to evaluate the validity of the inferences connecting them. In our case, our task is slightly different, for we are not seeking to trace whatever threads of truthfulness may bind these sentences, but solely to introduce an abstract structure that will render the text block a compound rather than an indivisible element.
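As a rough illustration (in Python rather than in the WordPress context of my own application, and with a deliberately naive sentence-splitting rule), the assignment of identifiers to sentences might be sketched like this:

```python
import re
import string

# The opening of the example paragraph above.
paragraph = (
    "This is a text block. "
    "Its content is of no import to our discussion."
)

# Split on sentence-ending punctuation followed by whitespace.
# A naive rule, but adequate for this illustration.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]

# Assign each sentence a lowercase identifier: p, q, r, ...
# (index 15 of the alphabet is 'p', echoing propositional logic).
identifiers = string.ascii_lowercase[15:]
propositions = dict(zip(identifiers, sentences))

print(propositions["p"])  # This is a text block.
print(propositions["q"])  # Its content is of no import to our discussion.
```

The dictionary is the abstract structure: the paragraph's appearance is untouched, yet each sentence is now addressable on its own.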
We can thus formulate a set of definitions regarding the paragraph’s structure, assigning each sentence its own variable; variables that we could then call directly in whatever function we want to perform that requires only subsets of the input to be effective. The operation would then become conditional and algorithmic, so that “where p, then x”, “where p, q, r… then x, y, z”, “if A or B and where p, q, r… then x, y, z” and so on. Each of the individual components would strictly require only a subset of the information, implying that the totality, the text block, if used as such would produce inconsistencies and validation errors.
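The “where p, then x” pattern can be sketched as a guard that checks only for the subset of sentences a given operation needs; a minimal, hypothetical example (the identifiers and the decomposition are assumed, not produced here):

```python
# A paragraph already decomposed into identified sentences.
propositions = {
    "p": "This is a text block.",
    "q": "Its content is of no import to our discussion.",
}

def where(props, *needed):
    """'Where p, q, ... then x': proceed only if the required subset exists."""
    return all(key in props for key in needed)

# "where p, then x": acts on one sentence, ignoring the rest.
if where(propositions, "p"):
    print(f"x: acting on '{propositions['p']}' alone")

# "where p, q, then x, y": acts on a pair of sentences.
if where(propositions, "p", "q"):
    print("y: acting on the p-q pair")

# An undefined variable simply fails the guard; no error, no guesswork.
if where(propositions, "r"):
    print("never reached: r is not defined")
```

Passing the whole undifferentiated paragraph to such a function would be exactly the inconsistency described above: the guard has no subset to check.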
What actually happens is that the formal structure is properly recognised and defined, even if the appearance of the whole is maintained. In the case of our text block, this would mean that instead of having an obscure chunk of text, we would produce a clearly-structured array of elements/sentences that could then be concatenated to maintain the phenomenal qualities of a paragraph. The transformation of the singular to the divisible entity would happen “under the hood”, or else in its formal structure.
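The “under the hood” transformation described above can be shown in a few lines: the program holds an ordered array of sentences, while the reader is shown their concatenation. (Again a Python sketch; the sentences are taken from the example paragraph.)

```python
# Under the hood: an ordered, individually addressable array of sentences.
sentences = [
    "This is a text block.",
    "Its content is of no import to our discussion.",
]

# At the surface: concatenation restores the phenomenal qualities
# of a single, monolithic paragraph.
paragraph = " ".join(sentences)

print(paragraph)
```

The reader perceives one block; the program retains access to its parts.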
To make the example a bit more tangible, consider the specifications each item in a repository can have. Take the case of a video game on an online store. There is the title of the item, a description, the genre it belongs to and its price. These are subsets of the total information about it (let’s assume that’s all there is). If they were presented as a whole, it would be impossible for the purchasing process to work properly, for the “buy now” button/command could not inherit the variables that correspond to the informational subsets necessary for determining the item and its cost. The system, just like a human, cannot be certain of the meaning entailed in an otherwise unclear thought, proposition, paragraph and so on. Either hermeneutics will have to be applied, which is not the most reliable of methods, or the monolithic entity, the undetermined object, will have to be made a compound whose parts can be referenced autonomously.
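The store example can be put in the same terms. Below, a hypothetical item is decomposed into named fields (the field names and values are purely illustrative), and the purchase step inherits only the two subsets it needs:

```python
# Hypothetical store item, decomposed into named fields rather than
# one undifferentiated block of text.
item = {
    "title": "Example Quest",
    "description": "An adventure of no import to our discussion.",
    "genre": "RPG",
    "price": 19.99,
}

def buy_now(item):
    # The purchase needs only the identifier and the cost;
    # it never has to interpret the description hermeneutically.
    return f"Purchasing '{item['title']}' for ${item['price']:.2f}"

print(buy_now(item))  # Purchasing 'Example Quest' for $19.99
```

Had the four specifications been fused into a single string, `buy_now` would have had to guess where the title ends and the price begins; with the compound structure, no interpretation is required.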
In conclusion, we were not interested in whether a perceived whole is rendered such due to its inner truthfulness. Admittedly, that is our digression from logic proper. We were solely concerned with identifying the formal aspects of a nominal entity, so that it can be treated as a compound even if it were to continue to be rendered to perception as monolithic. In this short article, a practical use of the form-content dichotomy has hopefully been outlined. Perhaps a more nuanced implication is epistemological, i.e. how the discernibility of the elements contributes to the [situational] certainty/knowledge we may have of them.
Thank you for reading!