# Robotics: a new algorithm for speeding up processing times

## 8 February 2022

The hundred or so lines of code developed by Guillaume Moroz are likely to be of interest to a good number of engineers in robotics and control theory. A researcher from the GAMBLE project team, Moroz has designed an algorithm for optimising the numerical evaluation of polynomials, a “free and relatively straightforward to implement” solution which he will present in early February at the prestigious FOCS conference.

**A new processing tool for the theoretical computer science community**

**Doing better with less**: this principle of optimisation is applied quite brilliantly by Guillaume Moroz – a researcher from the GAMBLE project team, a joint undertaking of Inria and Loria – in a paper that he is set to present between 7 and 10 February at **the FOCS (Foundations of Computer Science) symposium**. Sponsored by the IEEE Computer Society, **this transdisciplinary conference is one of the most important in the field of theoretical computer science**, bringing together specialists in algorithms, data structures and information technology.

*“To be selected for FOCS, results must have intersecting benefits and speak to multiple scientific communities. This is very much the case with my article, which is aimed as much at experts in computer algebra as it is at experts in robotics or artificial intelligence.”*

**What Guillaume Moroz has devised is a new method for solving one of the most fundamental problems in theoretical computer science: the numerical evaluation of polynomials, i.e. processing a sequence of additions and multiplications.**
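As a point of reference (this is standard textbook material, not Moroz's method), evaluating a polynomial really does reduce to a sequence of additions and multiplications: Horner's rule does it with one multiplication and one addition per coefficient.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x using Horner's rule.

    coeffs are ordered from highest to lowest degree, so
    [2, -3, 1] represents 2*x**2 - 3*x + 1.
    """
    result = 0
    for c in coeffs:
        result = result * x + c  # one multiplication, one addition
    return result

# 2*x**2 - 3*x + 1 at x = 3  ->  18 - 9 + 1 = 10
print(horner([2, -3, 1], 3))  # 10
```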

*“This paper was turned down by another conference, where it was felt to be too long and too complex”*, admits the author, who is glad he followed the panel’s recommendations before presenting a second version to FOCS.

**Fewer than 100 lines of code to solve a 50-year-old problem**

**In 92 lines of code written in the programming language Python, Guillaume Moroz has formulated** *“a new method that will be quicker at finding the roots of polynomials in a numerical context. Processors only carry out additions, subtractions and multiplications, leaving out more complex functions such as sines or cosines. When you want to start modelling the real world, polynomials come into play. Even epidemiological studies sometimes draw on polynomial equations.”*

Through his research, this expert in computer algebra and computational geometry has devised **a method specifically optimised for robotics**. This is a field where polynomial functions are omnipresent and in which so-called “approximate” calculations are carried out, i.e. calculations where a small amount of uncertainty is tolerated in order to take into account variations (distance from an object, expansion of a material, etc.) so as not to hinder the smooth running of an operation.

*“We know that the fast algorithms developed in 1972 to evaluate polynomials are numerically unstable with approximate operations”*, says Moroz, whose code is based on the fast Fourier transform (FFT), a famous algorithm dating back to 1965, when “optimisation” first appeared in the lexicon of computer science.
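For orientation only (a textbook radix-2 sketch, not Moroz's code): the FFT evaluates a polynomial of degree below n at all n-th roots of unity using on the order of n log n operations, rather than the n² a point-by-point approach would need.

```python
import cmath

def fft(coeffs):
    """Evaluate the polynomial with the given coefficients (lowest
    degree first) at all n-th roots of unity, where n = len(coeffs)
    is a power of two.  Classic recursive radix-2 FFT."""
    n = len(coeffs)
    if n == 1:
        return coeffs[:]
    even = fft(coeffs[0::2])   # even-degree part
    odd = fft(coeffs[1::2])    # odd-degree part
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(2j * cmath.pi * k / n)  # n-th root of unity
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

# p(x) = 1 + x + x**2 + x**3 at the 4th roots of unity 1, i, -1, -i:
values = fft([1, 1, 1, 1])
# p(1) = 4, and p(i) = p(-1) = p(-i) = 0
```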

**One small step for man, one giant leap for processors**

**The concept behind the algorithm developed by Guillaume Moroz is to avoid redundant operations in order to reduce the amount of processing, thereby improving processing times.** *“My code can be used to evaluate a polynomial quickly on multiple points, without sacrificing precision. The algorithm pools operations between evaluations in order to avoid having to start over again from zero. For polynomials of degree 25,000, it isolates roots ten times quicker than existing software”*. And he has the proof to back it up in his article. Moroz’s work has earned the praise of Fredrik Johansson, a researcher from the LFANT project team at Inria Bordeaux:

*“This leads to massively faster computation of polynomial roots. […] Moreover, the source code is available (and the code is really simple)”*, Johansson shared on Twitter.

*“My algorithm parallelises extremely well”*, explains Guillaume Moroz. *“It is relatively straightforward to implement.”*
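The pooling of operations between evaluations that Moroz describes has a classical small-scale analogue, shown here purely as an illustration (this is not Moroz's algorithm): a single Horner pass can produce both p(x) and its derivative p'(x), so Newton's method can refine a root estimate without a second, separate evaluation.

```python
def horner_with_derivative(coeffs, x):
    """One Horner pass yields both p(x) and p'(x).

    coeffs are ordered from highest to lowest degree.  The derivative
    accumulator dp reuses the partial values of p as they are built,
    instead of starting a second evaluation from zero.
    """
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p   # derivative of the partial polynomial
        p = p * x + c     # Horner step for the polynomial itself
    return p, dp

def newton_root(coeffs, x0, steps=50):
    """Refine a root estimate with Newton's method: x -= p(x)/p'(x)."""
    x = x0
    for _ in range(steps):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break
        x -= p / dp
    return x

# x**2 - 2 has the root sqrt(2) = 1.41421356...
root = newton_root([1, 0, -2], 1.0)
```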

**An open-source, ‘copyleft’ code**

**Published on HAL, the code for the algorithm is available to all**. This was an easy choice for its creator:

*“It’s only logical to share the results of publicly-funded research with as many people as possible.”* Guillaume Moroz has made his code available under a GNU GPL licence, making it free and ‘copyleft’:

*“If someone uses it, then they must respect the same licence and make their software open-source as well.”* Interest hasn’t been slow in coming. Indeed, one Microsoft employee implemented the algorithm on a graphics processing unit:

*“I don’t know any more about the end use, but my code can be used in ray tracing, for instance, which is the rendering of images using rays of light.”*

**Soon to be implemented in computational science software**

Guillaume Moroz is currently working on an official collaboration with Maplesoft, the publisher of Maple, computer algebra software that is well-known to researchers and engineers in robotics and control theory: *“We currently have a contract for incorporating my method into the software using a prototype written in C. Any improvements made will be the property of Inria, the overarching objective being to improve the code in the public interest.”* The researcher is also working on incorporating his code into Sage, *“Maple’s open-source counterpart”*, and is delighted at how his algorithm has been received: *“It’s quite rare for there to be such a brief window between a theoretical article being published and technology transfer.”*

*Source: Inria.fr*