FABIOLA enables solving Constraint Optimisation Problems (COPs) over large datasets by using Big Data technologies in a user-friendly way. It allows users to (1) create COP models; (2) integrate different data sources; (3) map dataset attributes to COP-model variables; (4) solve the COPs in a distributed way; and (5) perform advanced queries on the results.
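To give an intuition of steps (1) and (3), the sketch below shows a COP model as variables with finite domains, constraints, and an objective, with domains drawn from dataset attributes. This is a minimal, hypothetical illustration using a brute-force search, not FABIOLA's actual API or solving strategy (which relies on choco-solver):

```python
from itertools import product

# Hypothetical sketch (not FABIOLA's API): a COP model is a set of variables
# with finite domains, a list of constraints, and an objective to minimise.
def solve_cop(domains, constraints, objective):
    """Exhaustively search the Cartesian product of the domains and
    return the feasible assignment with the lowest objective value."""
    best, best_cost = None, float("inf")
    names = list(domains)
    for values in product(*domains.values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            cost = objective(assignment)
            if cost < best_cost:
                best, best_cost = assignment, cost
    return best, best_cost

# Mapping dataset attributes to COP-model variables: each row of the
# (made-up) dataset contributes one variable and its domain.
dataset = [{"id": "x", "options": [1, 2, 3]}, {"id": "y", "options": [2, 4]}]
domains = {row["id"]: row["options"] for row in dataset}
constraints = [lambda a: a["x"] + a["y"] >= 5]   # feasibility condition
objective = lambda a: a["x"] + a["y"]            # value to minimise

best, cost = solve_cop(domains, constraints, objective)
print(best, cost)  # {'x': 1, 'y': 4} 5
```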
The FABIOLA Big Data layer is based on Apache Spark. The most recent version relies on the choco-solver COP solver. The user interface comprises a REST API implemented in NodeJS and a front-end based on AngularJS.
The architecture fulfils the principles of low coupling and high cohesion. The communication among the modules that compose this architecture is performed through REST APIs. In this way, all components are highly independent, and modifying or scaling any of them has little impact on the others.
The Big Data layer is deployed in a DC/OS cluster, which provides a highly elastic environment: extending or reducing the number of available nodes is an easy and transparent process. The back-end and front-end, in turn, are deployed as Docker images.
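As a sketch of this kind of containerised deployment, a minimal Dockerfile for a NodeJS back-end might look as follows. The base image, file names, and port are assumptions for illustration, not FABIOLA's actual configuration:

```dockerfile
# Hypothetical Dockerfile for a NodeJS REST API (paths and port are assumptions)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```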
A quick tour
Next, a quick tour of the features of this tool is presented.
Creating COP Models
The proposal was tested in an industrial scenario: several Spanish electricity companies wanted to determine the optimal power that each of their customers should contract in order to minimise their consumption. Our study demonstrated that solving the COPs in a distributed way drastically improved the global execution time, and including more worker nodes may further improve the performance.
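The key property that makes distribution effective here is that each customer's COP is independent, so the dataset can be partitioned and each partition solved on a different worker. The sketch below illustrates this partitioning idea with the Python standard library; the tariff levels, thread-based parallelism, and helper functions are illustrative assumptions (FABIOLA distributes the work across nodes with Apache Spark):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical contractable power levels (kW) -- not real tariff data.
TARIFFS = [2.5, 5.0, 7.5, 10.0]

def optimal_power(peak_demand):
    """Pick the cheapest tariff that still covers the customer's peak demand."""
    feasible = [t for t in TARIFFS if t >= peak_demand]
    return min(feasible) if feasible else max(TARIFFS)

def solve_partition(peaks):
    # Each partition's customers are solved independently of the others.
    return [optimal_power(p) for p in peaks]

customers = [1.2, 4.8, 6.1, 9.9, 3.3, 7.5]  # made-up peak demands (kW)
# Split the dataset into two partitions, one per worker.
partitions = [customers[i::2] for i in range(2)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = [p for chunk in pool.map(solve_partition, partitions) for p in chunk]
print(results)
```

In the real deployment, the same pattern scales out by adding worker nodes: each node receives a partition of customers and solves its COPs locally.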
 Valencia-Parra, Á., Varela-Vaca, Á. J., Parody, L., & Gómez-López, M. T. (2020). Unleashing Constraint Optimisation Problem Solving in Big Data Environments. Journal of Computational Science, 45, 101180. https://doi.org/10.1016/j.jocs.2020.101180