
Same writing style

Mr. Glass

I believe that when you write down your thoughts about anything that matters to you, the words you pick are important.

Example: ‘cheap’ has a negative connotation, in that it makes the product seem inferior. Instead, use ‘cost-effective’, or ‘affordable’.

This is something folks in marketing, political speechwriting, or any message-conveying profession would understand. However, it’s also a game of cat and mouse. If your company’s marketing department uses the term ‘cost-effective’ to its benefit, it will be mimicked by your industry, and by adjacent industries, without much thought. Once it is used by enough corporate-type people, the customers and users being targeted by such marketing will catch on to it.

As a consumer, you now understand that ‘affordable’ usually means ‘cheap’. The expectation conveyed by the word ‘affordable’ has changed over time, becoming the same as ‘cheap’. So, there will be another term, and eventually its meaning will wane as well. It is endless.

Or, for a slightly more relevant example in terms of current affairs, saying that “India is one of the few democratic nations that have not voted to censure Russia in the United Nations over its aggression towards Ukraine” implies that India’s actions are in favour of Russia.

If you are not already informed about India’s historical stance on global issues, its geopolitical policies, or its history of wars and other political conflicts, both domestic and international, this framing will promote the idea of ‘us versus them’: that India does not support Europe and the US and is in the “Russian camp”.

Instead, saying that “India is one of the few democratic nations to have abstained from voting on resolutions against Russia in the United Nations over its aggression towards Ukraine” implies that India has chosen to not be on either side.

You may think that these two sentences say the same thing, and you would be right to think that, because they do to an extent. But the latter omits the words ‘not’ and ‘censure’; the former is editorialised to use them, pushing the reader towards believing that India is ‘not’ in support of the resolutions against Russia, when, in fact, it has abstained: India has opted to take no position on the matter of Russian aggression. It’s the tone and implication that differ, leaving you feeling differently about the subject.

The tone of a message and the choice of words make all the difference, especially when that message will be repeated to you several times by news reports across organisations in the months to come. Most people won’t notice it, but it will stick in their subconscious.


AI will inevitably take over the simplistic content-writing work that has been fuelling an entire community of outsourced low-effort writers.

I realise that those are harsh words, but most content writing work is subpar at best and downright dangerous at worst. An article comparing the ’10 best blood pressure monitors’ is not focused on researching and reviewing blood pressure monitors — its author most likely lives in a developing nation and the closest they have been to a blood pressure monitor is at their local doctor’s “home clinic”.

Of course, I am generalising; it could also be a doctor who can attest to certain brands being better than others in quality, reliability, or whatever other relevant metric. However, my generalisation is based on probability. A doctor in a developing nation is less likely to be writing content for random blogs on the internet; being a doctor is a profession that pays very well, in the context of a developing nation. Generally, people in developing nations also tend to have fewer hobbies.

That is enough of me justifying my generalisations. Some of it is anecdotal, some factual. I can’t be bothered to add references and citations, at least for the moment.

But, hold on to this thought of hating generalisations for the moment.


What slightly worries me is the advent of ‘assistive writing’. Tools like Grammarly were originally intended to help users improve their grammar. But, in the hands of big tech, everything can become dangerous.

Google recently announced a set of what it deems improvements to its assistive writing feature: not only will it provide fixes and suggestions to improve your grammar, it will now also assist you with:

Word choice: More dynamic or contextually relevant wording
Active voice: Active rather than passive voice
Conciseness: More concise phrases
Inclusive language: More inclusive words or phrases
Word warnings: Reconsidering potentially inappropriate words

This is a dangerous slippery slope.

But we didn’t start slipping down this slope recently. It’s been a while; it began around two years ago, when Gmail added a feature that Google likes to call ‘Smart Compose’, which finishes your sentences for you.

AI, at this moment, is capable of writing articles that only have to be reviewed briefly by a human. It hasn’t yet learned to thoroughly fact-check, but we will get there as well, probably within a decade or two, if not less.

But, the fact that AI can ‘assist’ you with your tone and language for your next news report, your thesis, a note in your diary, or a text message to your significant other, can lead to a sort of paradigm where everybody writes… the same. Or, even more concerning — where everybody writes the way Google wants them to.

When I write, I try to use gender-neutral language. I try to use more common words, so my message is conveyed without the reader having to reach for a thesaurus. I make these choices consciously. Someone with different political, cultural, or moral leanings would write differently.

Some might not even care about any of this and write whatever words come to their mind. Google’s AI is for them: generalising their writing into something non-offensive and politically correct, but only according to whatever leanings the engineers at Google have.

I asked you to hold on to that hatred for generalisations earlier; you may release it now.

In short, even if the next article you read is not written by an AI, be aware that its tone, language, and message could have been “assisted” by tools produced by a big tech corporation that does not share your values, or, more importantly, the author’s.

This post was assisted by Microsoft Editor.