
A future where robots can build robots.


We now have programs which can configure themselves (machine learning), algorithms which can solve unknown problems through natural selection (genetic algorithms), self-programming machines, algorithms which can generate algorithms (through extraction)... In the future, robots will replace programmers and will even be able to build other robots themselves.

Programming is not an exact science. It demands tremendous brainpower and creativity, drawing on human intelligence with all the risks and errors that entails. It requires a logical, thorough and experienced mind, as well as technological prowess. It also takes a good deal of inventiveness and intuition: programmers come up with methods and tools to make their lives easier, detect redundant or recurring problems, create abstraction layers and tackle problems with ingenuity and elegance.
In a rapidly changing digital world, no one is safe from seeing their job affected by platforms (known as ‘Uberization’) or even replaced by robots. The people who create these platforms or this artificial intelligence will ultimately take control of the world. But for how long? And what if this programming can also be carried out by robots? This is the current trend and various events are converging towards this unavoidable turning point. 
“Once machines have independent thought and can program themselves, that's the turning point”
said Steve Wozniak, co-founder of Apple.

This turning point could well coincide with what some call the singularity - the moment when machine intelligence will surpass that of humans.



Programmers are lazy

The history of programming is a succession of abstraction layers, each layer aiming to simplify the programmer's job. The first computer programs were written in machine language (in binary: 0, 1), on punched cards (hole, no hole). This was extremely cumbersome in practice, so programmers invented increasingly high-level languages, from assembly language (a second-generation language) up to program generators (fourth generation) and even constraint solvers (fifth generation).

Fourth-generation languages, which are often accessible to non-programmers, focus on solving a specific class of problem, for example building a business application around a database and a user interface. Often bundled into software engineering workbenches, they use a syntax close to natural language and rely on lower-level program generators.
Fifth-generation languages aim to solve more general problems without explicit algorithms, using logic programming and constraints. While they are still taught in universities and used in small-scale artificial intelligence projects, they have not achieved the hoped-for success.
Fourth- and fifth-generation languages also aim to let the machine carry out most of the programming.
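As a toy illustration of the declarative style behind fifth-generation languages (stating constraints rather than steps), here is a minimal Python sketch. The solver, constraints and domain are invented for illustration; real constraint languages such as Prolog work quite differently.

```python
# A "program" expressed as constraints rather than steps: we declare what
# must be true and let a generic brute-force search find values that fit.
from itertools import product

def solve(constraints, domain):
    """Return the first (x, y) pair in the domain satisfying every constraint."""
    for x, y in product(domain, repeat=2):
        if all(check(x, y) for check in constraints):
            return x, y
    return None

constraints = [
    lambda x, y: x + y == 10,   # the two numbers sum to 10
    lambda x, y: x * y == 21,   # their product is 21
    lambda x, y: x < y,         # return them in ascending order
]

print(solve(constraints, range(11)))  # (3, 7)
```

The point is the division of labour: the human states the problem, the machine supplies the search.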

Machines which learn 

Artificial intelligence is not a new field, but it has grown significantly in the last few years. The increased power of GPUs (graphics card processors, heavily used by AI algorithms), combined with the availability of Big Data, has driven the growth of machine learning algorithms, and more specifically deep learning.
Machine learning algorithms are programmed to modify their behaviour depending on the data provided to them as input. They learn by example. We could feed a huge quantity of cat photos into an algorithm, indicating for each one that "this is a cat". The algorithm would build a model of a cat's characteristics. Subsequently, when presented with a photo it has never seen before, it can predict whether or not it shows a cat.
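A minimal sketch of this kind of supervised learning in Python. The "photos" are replaced by invented two-number feature vectors (hypothetical stand-ins for real image properties), and the model is a simple nearest-centroid classifier rather than a modern neural network.

```python
# Toy supervised learning: examples are (features, label) pairs. The two
# numeric features are hypothetical stand-ins for image properties; a real
# system would learn such features from raw pixels.
def train(examples):
    """Average the feature vectors of each label into one centroid per label."""
    grouped = {}
    for features, label in examples:
        grouped.setdefault(label, []).append(features)
    return {label: [sum(col) / len(col) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the given features."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

examples = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
            ([0.1, 0.2], "not cat"), ([0.2, 0.1], "not cat")]
model = train(examples)
print(predict(model, [0.85, 0.75]))  # cat
```

The behaviour of `predict` is determined entirely by the training data, not by hand-written rules: change the examples and the "program" changes with them.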
Deep learning is an offshoot of machine learning which uses several hierarchical modelling layers to analyse increasingly complex data.
It is used, for example, in language recognition, obstacle recognition for self-driving cars, facial and scene recognition, etc.
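The idea of hierarchical layers can be sketched as follows. This toy Python example stacks two fully connected layers; the weights are arbitrary illustrative numbers, not trained values, and real deep networks have many more layers and parameters.

```python
# A minimal sketch of "hierarchical modelling layers": each layer transforms
# the previous layer's output, so later layers can represent more abstract
# combinations of the raw inputs. Weights here are arbitrary, not trained.
def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 2.0]                                       # raw input features
h = layer(x, [[1.0, -1.0], [1.0, 1.0]], [0.0, 0.0])  # layer 1: simple patterns
y = layer(h, [[1.0, 1.0]], [-1.0])                   # layer 2: combinations of patterns
print(h, y)  # [0.0, 3.0] [2.0]
```

Training consists of adjusting those weights automatically from examples; the layered structure itself is what "deep" refers to.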

Automatic captions: scene recognition is able to describe the context, people and actions contained in a photo. This Facebook R&D project allows blind people to have an idea of what is going on around them. [Source: A snail in Manhattan]
Besides the learning described above, known as supervised learning, there are also reinforcement learning systems, where the algorithm is penalised if it provides the wrong solution and rewarded if it makes the right decision; this approach is used in certain video games. The challenge now is to achieve unsupervised learning, where the expected output ("this is a cat") is not specified and the algorithm must deduce it by itself.
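A minimal sketch of the reward/penalty mechanism, assuming a made-up two-action environment. The reward probabilities, exploration rate and learning rate are all invented for illustration.

```python
# Toy reinforcement learning: the agent repeatedly picks one of two actions,
# receives a reward (+1) or a penalty (-1), and nudges its value estimates
# towards whichever action pays off. The environment is entirely invented.
import random

def environment(action):
    """Hypothetical environment: action 1 usually pays off, action 0 rarely does."""
    return 1 if random.random() < (0.9 if action == 1 else 0.1) else -1

random.seed(0)                # fixed seed so the run is repeatable
values = [0.0, 0.0]           # the agent's estimate of each action's worth
for step in range(500):
    if random.random() < 0.1:                  # occasionally explore at random
        action = random.randrange(2)
    else:                                      # otherwise exploit the best estimate
        action = max((0, 1), key=lambda a: values[a])
    reward = environment(action)
    values[action] += 0.1 * (reward - values[action])  # nudge estimate towards reward

print(max((0, 1), key=lambda a: values[a]))    # the action the agent now prefers
```

No one tells the agent which action is correct; the reward signal alone shapes its behaviour.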
Reproducing the operations of the human brain is without doubt the greatest dream of artificial intelligence. We are still a long way from achieving this.
DeepMind, a company recently acquired by Google, is attempting to take inspiration from the way the brain learns. Its DeepRL system combines deep neural networks and reinforcement learning to learn, for example, how to become an Atari 2600 video game champion by itself, just by observing and trying. Its AlphaGo algorithm beat a professional human player at the game of Go, something which was unthinkable a few years ago, as Go requires intuition and inventiveness.
In all these examples, we can see that deep learning algorithms modify their behaviour independently, with no programmer involvement. 

Evolutionary algorithms

Evolutionary algorithms are inspired by the theory of evolution. They improve their results by repeating random variations and eliminating the candidates which do not work, in order to converge on a solution to a problem. In a manner of speaking, they learn from their mistakes. Within the family of evolutionary algorithms, genetic algorithms, and genetic programming in particular, focus on generating programs.
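A minimal genetic-algorithm sketch in Python. The "problem" (matching a target string), the population size and the mutation scheme are classic toy choices invented for illustration.

```python
# Toy genetic algorithm: random mutation plus survival of the fittest.
import random
import string

TARGET = "robot"

def fitness(candidate):
    """Number of characters matching the target (higher is better)."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Copy the candidate with one randomly chosen character replaced."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(string.ascii_lowercase) + candidate[i + 1:]

random.seed(42)               # fixed seed so the run is repeatable
population = ["".join(random.choice(string.ascii_lowercase) for _ in TARGET)
              for _ in range(20)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection: keep the fittest half; reproduction: mutants of survivors fill the rest.
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]

print(population[0])
```

Nothing in the loop knows how to spell the target; random variation and selective pressure alone produce it.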
In the same vein, this six-legged robot is able to re-learn how to walk if it is damaged. If it loses the use of one of its legs, it can re-program itself in less than a minute to walk again as before, but with one leg fewer. To do this, it carries out a series of trials to determine which gait corresponds best to its normal operation.

A program which writes its own code

It is easy to combine artificial intelligence and genetic algorithms to produce a program able to produce its own source code and self-improve. 
Several projects listed on the site Self-programming machines explore this idea, as does the experiment Using Artificial Intelligence to Write Self-Modifying/Improving Programs.
Self-programming has also been the subject of academic studies. In the IDeal (Implementation of DEvelopmentAl Learning) project run by the CNRS, for example, the self-programming agent uses a machine learning algorithm. Generally speaking, a machine learning program acquires data and uses it as execution parameters for a pre-defined algorithm. Here, the self-programming agent acquires data which are sequences of instructions, and it controls the execution of its own algorithm with these instructions. The program learns to program itself and to improve independently.
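In the same spirit, though far simpler than the academic work described above, here is a toy Python sketch in which the agent's "data" are sequences of instructions, and the agent searches for the sequence that best achieves a goal. The instruction set, the goal and the scoring are all invented for illustration.

```python
# Toy self-programming agent: its "data" are instruction sequences, and it
# executes whichever sequence scores best against its goal.
import random

INSTRUCTIONS = {
    "inc": lambda x: x + 1,   # add one
    "dbl": lambda x: x * 2,   # double
    "dec": lambda x: x - 1,   # subtract one
}

def run(program, x=0):
    """Execute a sequence of instruction names, starting from x."""
    for name in program:
        x = INSTRUCTIONS[name](x)
    return x

def improve(goal, length=5, tries=2000):
    """Randomly sample instruction sequences and keep the one closest to the goal."""
    random.seed(1)            # fixed seed so the run is repeatable
    best, best_score = None, None
    for _ in range(tries):
        candidate = [random.choice(list(INSTRUCTIONS)) for _ in range(length)]
        score = abs(run(candidate) - goal)
        if best is None or score < best_score:
            best, best_score = candidate, score
    return best

program = improve(goal=7)
print(program, "->", run(program))
```

The program the agent ends up executing was never written by a human; it was found by treating code as data.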

The robot which assembles algorithms

The problem with self-learning algorithms is that they have become so complex that no one is able to understand how they work. They are black boxes. “Generally speaking, we do not know what the programs do” admits Stéphane Grumbach, research director at Inria.
For all these systems, a human must write the initial algorithm which will learn and then modify itself. We are still far from a robot which is able to create a program ex nihilo based on a given problem. In the end, we always come back to the same topic - an algorithm which is a pure product of human intelligence.
Now, thanks to technologies like CodeCase Software, it is possible to extract the algorithms from an application in the form of a universal meta-language and to regenerate new source code from them. Imagine that a robot using this technology could extract the algorithm from an existing program (its own or another's), improve that algorithm by simplifying it or by integrating other algorithms, and regenerate a more efficient program. The role of the programmer is going to get easier... or change significantly.
Gartner, for its part, is focusing on algorithms, and more specifically the algorithm economy. In its forecasts, the analyst firm sees a post-app future in which intelligent agents assemble specialised algorithms available on marketplaces. Gartner predicts that by 2020 these intelligent agents will handle 40% of interactions, and that Microsoft's strategy will be focused on Cortana rather than Windows.

The robot which builds robots

This is an entirely possible future if we consider that agents, with current progress in artificial intelligence, will undoubtedly be able to understand the needs expressed in natural language by humans, and subsequently model a solution to the problem and assemble appropriate algorithms. 
When these agents are able to independently identify the problems to solve, and then decide to generate the right program, which will be another independent and intelligent agent, we will be close to reaching a scenario where robots can build robots.

© 2014 Codecase, all rights reserved


Let's meet!