“I’ll be back”

For weeks, we have been writing about AI, emerging technology, and how they will reshape the world around us. This technological journey has been enjoyable, but we believe we have overlooked something. That will no longer be the case; today, we will answer a critical question: should you be concerned about Artificial Intelligence?

From Isaac Asimov’s Three Laws of Robotics in 1942 to their cinematic debut in I, Robot, artificial intelligence has been at the heart of modern science fiction. Asimov’s laws state: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Even though the laws have evolved over time, they have remained a common stepping stone in the creation of robots. The Turing test, for example, assesses a machine’s intelligence by asking whether a human judge can distinguish its conversation from that of another person. The general fear of AI stems from the question, “What if we lose control?” That covers the general fear, but as you’ll see, it’s a little early to be concerned about Skynet’s awakening; there are more legitimate and realistic concerns that we’ll discuss here (sorry, we do not have blueprints for a time machine yet).

This is a concern you may have had in 2008, not because of the Global Financial Crisis, but because of the movie WALL-E, which portrayed a world in which humans no longer work and are dumber than ever due to (spoiler alert) a robot takeover.

Yes, prepare for unemployment😧: To be a little more serious, a major source of concern is that, while the previous wave of automation primarily affected blue-collar employment such as manufacturing, the current wave will primarily affect white-collar, service-oriented jobs that depend on information workers. As the use of AI expands and pervades the business world, many parts of the economy will no longer need as many skilled human employees. Blue-collar jobs, such as delivery and cab driving, as well as many other facets of supply chain, logistics, and manufacturing, are also affected by AI.

But not yet🥳: The counterargument is that most of these systems are not yet capable of efficiently replacing human workers. Though AI systems have a wide range of capabilities, they cannot function completely autonomously. Most effective AI implementations work as augmented intelligence, helping a person do what they do best rather than replacing them outright. In general, when technological waves change industries and employment, job categories are replaced, but overall jobs are not lost. In practice, jobs expand and find new niches, while computers simply replace old ways of doing things.

Yes🤯: Since some Russian leaders have declared that whoever leads the development of AI will rule the world, it is fair to say that AI is becoming a weapon. It’s no wonder that countries are investing heavily in AI research and development for a variety of reasons, ranging from military advancement to intelligence systems that can influence the news. We should expect governments to continue to use AI systems in ways that make us uneasy, such as in warfare, surveillance, law enforcement, and other applications. If wars continue to occur in the future, it might be more fun to watch robots dismantle each other than humans, but it would undoubtedly be a dystopian future, especially for poor countries.

Fortunately not😅: Although we can expect governments and countries to fight for AI dominance, it is not governments that we should be worried about. After all, the aim of laws and governance is to keep a close eye on government actions. We have plenty to fear from bad actors, terrorists, and mischief-makers who will use AI technologies for their own nefarious purposes. Since AI systems learn from the people who build and train them, doubts arise about those creators’ and trainers’ intentions and what they aim to achieve. The unknown is the source of all anxiety.

🧐What you can be certain of is that regulators will take action in response to AI usage. According to the European Commission’s most recent work on AI regulation, those considering using AI must first determine whether a particular use-case is “high risk,” and therefore whether a pre-market compliance assessment is required. It is important to remember, however, that labelling an AI system as high-risk for the purposes of this Regulation does not inherently mean that the system or product as a whole will be labelled as such. If you want to read more about the subject, we will be happy to help you through this link, which refers to one of our recent newsletters.
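To make the gating idea concrete, here is a minimal sketch of the two-step logic the proposal describes: classify the use-case first, and only then decide whether a pre-market assessment is needed. This is a toy illustration, not the actual legal test; the risk areas listed below are our own illustrative assumptions, not the Regulation’s annex text.

```python
# Hypothetical sketch: a use-case deemed "high risk" triggers a
# pre-market conformity assessment before deployment.
# The area names are illustrative assumptions, not the legal list.
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "law enforcement",
    "credit scoring",
}

def requires_premarket_assessment(use_case_area: str) -> bool:
    """Return True if the use-case falls in a (hypothetical) high-risk area."""
    return use_case_area.strip().lower() in HIGH_RISK_AREAS

print(requires_premarket_assessment("law enforcement"))  # True
print(requires_premarket_assessment("spam filtering"))   # False
```

In practice, of course, the classification itself is the hard legal question; the point of the sketch is only that the compliance obligation hinges on that first determination.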

To begin, I apologise, but if you are a fan of the recent reboots, there is no doubt in my mind that they were awful, so the answer would be yes.

Not at all😎: What you can legitimately fear is that, after a certain point, our brains will simply be unable to keep up with growth, progress, and innovation because things will be moving much too quickly. And, to be honest, computing systems could very well surpass their human creators. What would that mean for humanity? It makes us wonder what intelligence really is, how we quantify and interpret intelligence as a term for humans and computers, and how that new meaning can fit into the world now and in the future. However, such a fear assumes that algorithms and AI will reach the goal of “Artificial General Intelligence” and that we, as a species or culture, will be unable to put controls in place to prevent machines from reaching that point.

A little bit🤡: Even if the media and movies have led you to believe that Artificial General Intelligence is just around the corner, we are quite a long way from having metaphysical discussions with an NS-5 (Sonny in I, Robot) as it prepares dinner at home. While much of the technology is improving quickly, there are still areas where it falls short. Data is still the bedrock of AI, and a lot of it is dirty and dusty. Don’t believe the hype the next time you see “next-gen AI software”; these days it is mostly marketing jargon.

To summarise, AI is a worrying subject, and you should still be cautious when discussing it, but we are far from a dystopian future (for the moment). If you want to read more about how AI still has an Achilles’ heel, follow the link!

Investing in European B2B Technology companies