
Data Optimization

Good energy data is hard to find, but essential for decision-making at every stage in the deal lifecycle.

Every day, millions of professionals waste precious time and money randomly Googling or paying outrageous prices for data access.

At ENIAN we’re passionate about making good energy data usable and affordable for every professional in the value chain so that deals move faster. With data collection and analysis automated, professionals are free to focus on getting quality deals done fast at low cost.

Here’s how we’re harnessing the power of big data and machine learning to generate serious time and cost savings for our customers:

Calibrated NASA & ESA satellite data

We use MERRA-2 (NASA) and SARAH-2 (ESA) satellite data to provide a real-time view of the available solar irradiance or wind speed resource at any location.

Irradiance and wind speed data are combined with local data and calibrated automatically by our algorithms to calculate the energy output of an existing or planned project.
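As a rough illustration of this step, the sketch below estimates a PV project's annual yield from a satellite irradiance figure and a locally derived calibration factor. The function name, the calibration factor and the performance-ratio default are hypothetical placeholders, not ENIAN's actual model.

```python
# Minimal sketch of estimating annual PV output from calibrated satellite irradiance.
# All parameter names and default values are illustrative placeholders.

def estimate_pv_output_kwh(
    satellite_ghi_kwh_m2_yr: float,   # global horizontal irradiance from satellite data
    local_calibration_factor: float,  # correction derived from local ground measurements
    capacity_kwp: float,              # installed DC capacity of the project
    performance_ratio: float = 0.8,   # typical system losses (inverter, soiling, wiring)
) -> float:
    """Estimate the annual energy yield of a PV project in kWh."""
    calibrated_ghi = satellite_ghi_kwh_m2_yr * local_calibration_factor
    # Specific yield approximation: kWh per kWp ~ calibrated irradiance x performance ratio,
    # assuming modules rated at 1 kW/m^2 standard test conditions.
    specific_yield_kwh_per_kwp = calibrated_ghi * performance_ratio
    return specific_yield_kwh_per_kwp * capacity_kwp


print(estimate_pv_output_kwh(1700.0, 0.97, 5000.0))  # roughly 6.6 GWh for a 5 MWp plant
```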

Optical image-recognition

Asset- and grid-level data are mined from hundreds of government and academic sources and verified by geocoding (converting each address into a coordinate pair) using an external API.
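The sketch below shows the shape of such a verification step: geocode a record's address and check that the result lands near the reported coordinates. It uses OpenStreetMap's public Nominatim endpoint purely as an example of an external geocoding API; the provider ENIAN uses and the tolerance value are assumptions.

```python
# Sketch of verifying a scraped asset record by geocoding its address and
# checking it lands near the reported coordinates. Nominatim is used here only
# as an example external API.
import requests

def geocode(address: str) -> tuple[float, float] | None:
    resp = requests.get(
        "https://nominatim.openstreetmap.org/search",
        params={"q": address, "format": "json", "limit": 1},
        headers={"User-Agent": "asset-verification-sketch"},
        timeout=10,
    )
    results = resp.json()
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lon"])

def roughly_matches(a: tuple[float, float], b: tuple[float, float], tol_deg: float = 0.05) -> bool:
    # Crude check: within ~5 km at mid-latitudes; a production pipeline would
    # use a proper haversine distance instead.
    return abs(a[0] - b[0]) < tol_deg and abs(a[1] - b[1]) < tol_deg
```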

Satellite imagery is then passed through a machine-learning image-recognition model that detects solar panels and wind turbines, marking every coordinate where the model finds them. The database is then checked by a small team of PhD-level analysts for quality assurance.
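To make the workflow concrete, here is a minimal sketch of scanning georeferenced image tiles with a binary classifier and keeping the coordinates where assets are detected. The tiny convolutional network and the detection threshold are untrained placeholders, not ENIAN's production model.

```python
# Sketch of tile-level asset detection over satellite imagery.
# TileClassifier is an illustrative stand-in for the real image-recognition model.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # probability that the tile contains a panel or turbine

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def detect_assets(tiles, coords, model, threshold=0.5):
    """tiles: (N, 3, H, W) tensor of image tiles; coords: list of (lat, lon) per tile."""
    model.eval()
    with torch.no_grad():
        scores = model(tiles).squeeze(1)
    return [c for c, s in zip(coords, scores) if s.item() > threshold]
```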

Levelized Cost of Energy prediction with a neural network

We take a bottom-up approach to predicting levelized costs, using a multivariate input layer with three main parameters: project latitude, project longitude and project capacity. The model outputs the levelized cost of energy.

Between these input and output layers there are two hidden layers, each consisting of one hundred linear nodes with rectified linear units. During training, the weights in this network are optimized using stochastic gradient descent and a mean-squared-error cost function.
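The sketch below wires up the network as described here: three inputs, two hidden layers of one hundred ReLU units each, a single LCOE output, stochastic gradient descent and a mean-squared-error loss. Training data, learning rate and batching are placeholders.

```python
# Sketch of the described LCOE network; hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 100), nn.ReLU(),    # inputs: latitude, longitude, capacity
    nn.Linear(100, 100), nn.ReLU(),  # second hidden layer of 100 rectified linear units
    nn.Linear(100, 1),               # output: predicted levelized cost of energy
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(features: torch.Tensor, lcoe: torch.Tensor) -> float:
    """features: (batch, 3) tensor of [lat, lon, capacity]; lcoe: (batch, 1) targets."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), lcoe)
    loss.backward()
    optimizer.step()
    return loss.item()
```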

High-Performance Computing Lab

Our in-house lab environment is built for rapidly testing our machine learning algorithms on GPGPUs (general-purpose graphics processing units) at low cost, without the overhead charged by cloud providers.

We can rapidly scale our algorithms and test them on large datasets while keeping the low latency of an internal network.
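As a small illustration of the kind of quick experiment such a lab enables, the sketch below times the same workload on CPU and on a local GPGPU. The matrix-multiply workload and sizes are arbitrary examples, not an ENIAN benchmark.

```python
# Sketch of benchmarking a workload on a local GPGPU versus CPU.
import time
import torch

def benchmark(device: str, size: int = 4096) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    start = time.perf_counter()
    (a @ b).sum().item()  # .item() forces any asynchronous GPU work to finish
    return time.perf_counter() - start

print(f"cpu: {benchmark('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"gpu: {benchmark('cuda'):.3f}s")
```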

Sound good?