Singularity in robotics

Introduction

Singularity is that point we fear would end us. But before we go any further, I want to tell you a story about humanity.

Back in college, my friend and I would go to the chess palace to have a nice time over a game or two. One particular day, I met him at the palace and thought we would have the usual. It was our routine, so he had no choice.

As we played, I knew something was up. He was definitely having a bad day, though I did not know why. I knew I had to do something to bring the smile back, so I began making sloppy moves and costly exchanges. In no time, he was lit up again. Though we both knew what was going on, we enjoyed the moment. After the game, he told me, “Thank you, I needed that.”

And that’s being human: consciousness and emotion playing out no matter what. Can it be built?

Can a computer be conscious or emotional?

No one knows for sure. But there is something we need to understand before we can answer this question: SINGULARITY.

What is singularity?

Singularity is a term from mathematics used to describe a point beyond which things make no sense.

In the context of artificial intelligence and robots, it is the point where human intelligence no longer makes sense as a reference, because an ever-widening intelligence gap opens up between robots and humans, driven by what is called an intelligence explosion.

Wait… back up a little.

Humans build robots, both the unintelligent and the intelligent ones, right?

So what singularity is telling us is this:

At some point, humans will build a superintelligent robot (an artificial general intelligence) whose intelligence matches or surpasses that of humans. But not only that: this superintelligent robot will be able to improve itself and go on to build a better version of itself.

Since it can self-improve, the superintelligent robot would build a robot wiser than itself, that new robot would build one wiser than itself, and so on. It may still be a single robot shipping as version 2.0, 3.0, and so on, but you get the gist, right?

Now, after much self-improvement, we would have a robot so much wiser than humans that any intelligence we display would hardly register anymore.

Note that it is not that we would be dumber; it is just that machines would be so much more intelligent that our intelligence would be outdated, more like celebrating the purchase of a basic cellular phone in 2019.

And that’s what is called an “intelligence explosion”.
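Purely as a toy illustration of that feedback loop (not anything close to a real AGI model; the starting capability, the human baseline, and the 1.5 improvement factor are arbitrary assumptions of mine), here is a short Python sketch: each version designs a slightly more capable successor, and the gap with a fixed human baseline keeps widening.

# Toy sketch of recursive self-improvement (illustrative only, not a real model).
# The baseline and the 1.5 improvement factor are arbitrary assumptions chosen
# just to show how the gap with humans widens version after version.

HUMAN_BASELINE = 100.0

def build_successor(capability: float) -> float:
    """Each generation designs a successor a bit more capable than itself."""
    return capability * 1.5  # assumed improvement factor

capability = HUMAN_BASELINE  # version 1.0 starts at roughly human level
for version in range(2, 12):
    capability = build_successor(capability)
    print(f"v{version}.0 capability: {capability:>10.0f}  "
          f"(gap vs. humans: {capability - HUMAN_BASELINE:>10.0f})")

Run it and, after only a few versions, the gap stops being a number worth comparing. That runaway widening is the “explosion” in intelligence explosion.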

But the point of concern is more than that. It is about domination after the intelligence explosion. It is about who will rule whom: the “wiser” or the “creator”?

That, my friend, is our worry.

This line of thinking started with John von Neumann, a man of many talents, who spoke of progress in technology approaching a point beyond which human affairs as we know them could not continue. Then came I. J. Good’s essays on the intelligence explosion, and people like Vernor Vinge and Ray Kurzweil, who predicted the singularity would arrive by 2023 (Vinge) and 2045 (Kurzweil).

But domination takes more than intelligence, a premise that Yann LeCun, Chief AI Scientist at Facebook, agrees with. He said, “the desire to dominate socially is not correlated with intelligence.” Domination is more human than anything we can conceptualize and put into algorithms as of today.

Which brings me back to my story from the beginning. Understanding emotions, and being conscious of more variables than the chessboard, is what makes us human.

It all comes down to this: can we make robots socially competent? Can we make them conscious? And can we make them emotional?

From lessons learned in history class, we understand that emotion is one of the crucial elements in overthrowing any dynasty or empire. So robots would need this too, and since we are still the ones building robots, they can only be emotional if we can embed emotions in algorithms.

And as of today, emotions are still further out of reach than we imagine. Some researchers even question whether we are truly building intelligent systems at all, but that’s a story for another day.

So, be calm.

To give you something more: did you know that Atlas, the big parkour-doing robot from Boston Dynamics, does not use artificial intelligence?

Conclusion

The Terminator series is one of the first movies many of us saw about robots, even before we read Asimov’s three laws of robotics (which we do not know whether robots would follow). But how long it would take us to reach a Terminator century, we may never know.

Singularity is more human than lines of code can capture. It is when we can truly build a more-than-intelligent robot that knows when to lose a game to bring a smile to a friend’s face. It is when a robot can be conscious enough to play a lady’s boyfriend to save her from an awkward conversation.

Though we do not know when that day will come, my advice is to keep it in mind (robots might actually be more human someday) but not to lose sleep over it. And in the meantime, be part of it, and see robots more as collaborators than as dominators.

Thank you for reading.

Peace
