Research Ex Machina


We hear less of ChatGPT today. A year ago, it was the hot topic of every coffee talk between scientists. Debates were raging online: how should the guidelines on research article writing, publishing and abstract submission change? How deeply wrong was the AI bot about technical topics? Was it going to make software engineers the new unemployed? But the wave was wearing out. Computer scientists started to integrate AI into software and apps, and made it part of their routines. We understood that software engineers would not be put to sleep, at least not right now.

There was another concomitant wave, ridden by our fear of fake news. What if scientific publishers were now bombarded with articles written by soulless, iron-clawed graphics cards? Would the machines invent microscopy pictures from scratch to support devious claims about miraculous drugs and flawless controls?

I was feeling the pressure, too. After I had left the institute, I was hearing about AI again on the French radio, in the dark corners of my kitchen and above the steaming soup. It was a thing for scientists, and it was also a thing for the general public. At night, I could not help but think of all those server farms, full of purring green computers, working in swarms to slay all the tasks that we did not want to perform anymore. They were writing our novels and our music, out of nowhere, over-fitting sentences that had been written identically thousands of times, stacking notes along the same chords used by generations of efficient pop hooks.

For my colleagues and me, the use of this new tool boiled down to an extra arm for writing (especially bureaucratic emails) and faster coding. What would we gain from this extra time? The outcome for our productivity was still unclear, but with the coming of age of AI, we would now read more and more letters sent to us by machines, operated by other humans to save a little bit of time at the expense of the humanity of their recipients.

“Dear Prof. X.,” would start the bot, “we are immensely honored to invite you to present your ground-breaking results at our next conference. We have been, indeed, hugely impressed by your recent publication in Journal of Great Scientific Discoveries.” Professor X. would frown after the first lines, fearing the trap of a predatory conference that would swallow the mini grant they had painfully won at the last internal call of the university. However, reassured by the over-abundance of correctly spelled words, they would probably keep reading.

“We hope you will grace our humble gathering with your genial presence. Please feel free to bring your shoes, and we will make sure they are made shiny by the volunteering students who are helping with the menial stuff.”

A lot of syrupy words, crowned by a short hallucination – had the bot inadvertently been fed some critical theory pages? Professor X. was surprised by the quality of the writing. At the end of the e-mail, there was a signature that brought back a face from memory. The face belonged to a conference some years ago. Professor X. had met it over a couple of pink martinis, in the evening following the keynote lecture. The face was blurry, without features, surrounded by fuzzy hair and thick glasses. What X. could remember, however, was that the scholar was not a shoe-shiner, but rather a direct mind who was not afraid of blunt criticism professed with a thick accent. Professor X. felt somehow heavier at the thought that the lines of the suspicious email would give away nothing more of this person they had appreciated in a semi-alcoholic vapor. They would rather read an email with spelling mistakes and foreign structures than a string of long, pompous words produced by some private automation housed overseas.

I am a romantic, like Professor X. When I was a kid, I thought the faces of others were veils stretched over a void embedded in bone. I did not know that the voice speaking from the darkness of my brain was not alone, and that all those human props reacting to my tears by presenting food or love were also inhabited by tiny ghosts in the machine.

But then, slowly, amid cruel fights in school yards, empathy starts to develop as one discovers the emotions that others also have. Soon, they are confronted with the existence of multiple realities side by side, one for each subjectivity and one for each emotional take on a given span of time. This empathy has a variable geometry: it is not the same with friends, acquaintances, or strangers. Kids become adults, and as adults they have to work, and as co-workers they are sometimes tricked into denying the humanity of their colleagues just as their younger selves did with the protagonists of their infant world. Indeed, even in creative professions like scientific research, specific tasks must be achieved, and if they are to be achieved only in a certain way, then they can be reduced to a piece of code, fed to a machine that is supposed to perform it as instructed. The more of a micro-manager one is, the more machined the managed become. We want to be as efficient as possible, and we want others to be more efficient. We look at our colleagues as if they were zero-order machines. We lose our own humanity in the cranks working for the deliverables.

Machines are not sensitive to rewards, but if they were, I bet they would be sensitive to rewards based on their efficiency. At least, that is how humans might create them. Humans also feel good when they are efficient, when at the end of the day they can remove some items from a list that never goes down to zero. In this list there are tasks, bits of achievable life that the brain has carved out to make us stand the progression of time. On the other side, away from efficiency, there is beauty. Along my scientific journey, there are more opportunities for relationships based on standardized exchanges and expectations. There are fewer opportunities to make friends, or just to have a curious exchange with another person on emotional or intellectual grounds, or both. Is it just growing up, or is it growing up in a machined society?

If I were inconsistent in my beliefs and a good computational scientist, I would train a machine-learning algorithm to detect AI lingua: those highly standardized words that have been around a little longer than ChatGPT and destroy every bit of originality that can exist in scientific work. I would do it to prove that those sentences are so stereotypical that they belong to the language of the machines, and not to the language of human beings.
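For the curious, the crudest possible version of that inconsistent experiment can be sketched without training anything at all: score a text by the density of stock phrases. Everything below is invented for illustration — the phrase list, the threshold, and the function names are assumptions, not a validated detector, which would require an actual trained classifier and real data.

```python
# Toy sketch: flag "machine lingua" by counting stock phrases per 100 words.
# The phrase list and threshold are arbitrary illustrations, not research.

STOCK_PHRASES = [
    "ground-breaking", "immensely honored", "hugely impressed",
    "delve into", "in the realm of", "it is worth noting",
    "cutting-edge", "paradigm shift",
]

def machine_lingua_score(text: str) -> float:
    """Return the number of stock phrases found per 100 words (0.0 if empty)."""
    words = text.split()
    if not words:
        return 0.0
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    return 100.0 * hits / len(words)

def looks_machine_written(text: str, threshold: float = 2.0) -> bool:
    """Arbitrary cutoff: two stock phrases per 100 words smells of the bot."""
    return machine_lingua_score(text) >= threshold

invitation = ("We are immensely honored to invite you to present your "
              "ground-breaking results; we have been hugely impressed.")
print(round(machine_lingua_score(invitation), 1))  # high phrase density
print(looks_machine_written(invitation))
```

A phrase list is, of course, exactly the kind of zero-order machine the essay complains about; the point of the sketch is only that stereotyped language is mechanically countable.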