Note: hyperlinks are embedded directly in the text rather than listed as references.

In recent years, there’s been an odd trend in the tech world, particularly with front-end technologies.

We have a parade of brilliant engineers, but they are marching in circles over ground we have already covered, doing the same thing each time, only a little more abstractly than the last.

There is nothing inherently wrong with the technologies we have used in the past (in fact, I won’t mention a single one here). Still, I find it odd that so much brainpower is devoted to solving problems that were already effectively addressed a decade ago (or more!).

It’s like watching a bunch of mathematicians trying to prove the Pythagorean theorem over and over again.

This reminds me of Jacques Ellul’s insights in “The Technological Society”.

In a way, we’ve got a bunch of builders, each claiming they’ve invented a new type of nail, when we really need someone to figure out a better way to build houses. Consider, for example, the complete dumpster fire that is scientific journal publishing.

We might now be stuck in this loop of reinventing the wheel, not because we need a new wheel, but because creating one gives us a sense of accomplishment. Which is ironic.

We can now claim LLMs are among the latest “achievements of the technological society” when, in reality, Shannon showed back in 1948 that English, like almost any human language, is largely predictable. Strictly speaking, that’s not the same as having a chatbot that does things for you; I know, I get the point of modern language models. But still:

  • The core of the idea arrived in 1948, with Shannon’s “A Mathematical Theory of Communication” (see the sketch after this list).
  • In 1958, Rosenblatt introduced the perceptron.
  • In 1966, we saw ELIZA, the first (and, for its time, advanced) chatbot.
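To make the 1948 point concrete, here is a minimal sketch of the kind of statistical prediction Shannon studied: a toy character-bigram model in Python. Everything in it (the function names, the sample text) is my own illustration, not Shannon’s actual method, which estimated the entropy of English from n-gram statistics and human guessing experiments.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters tend to follow it."""
    model = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        model[current][following] += 1
    return model

def predict_next(model, char):
    """Return the most likely successor of `char`, or None if unseen."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

# A deliberately repetitive sample: redundancy is what makes prediction work.
sample = ("the quick brown fox jumps over the lazy dog and then "
          "the quick brown fox jumps over the lazy dog once more")

model = train_bigram(sample)
for c in "the q":
    print(f"after {c!r}, the model expects {predict_next(model, c)!r}")
```

Even this crude model guesses the next character far better than chance, which is exactly the redundancy Shannon quantified; modern language models are, in that narrow sense, vastly scaled-up descendants of the same observation.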

I could continue. However, the point is to ask one simple question: what has happened since then?

Still, these days we have plenty of research, advancement, and science, yet it’s incredible how little “information” we can really get out of it.

Even the “Attention Is All You Need” paper has been “demystified”; had I not read that article, I’d have claimed the paper as a good candidate for genuinely new information.

Nowadays, if you chat with an LLM, how do you know it will be truthful?

I’ll end with a conclusion in the shape of a question: where, or who, is the next Bell Labs?

Maybe there’s simply too much information for anyone to care about all of it.

Maybe it’s just the system that enables us to behave like selfish agents.

Perhaps it’s simply us.