All the world’s energy data in one place.
- How much capacity will a site yield? Where’s the nearest grid?
- How many other players are there nearby? Who owns what, and how’s it doing?
- What are the local market dynamics?
We understand these issues because we've been there.
Energy Search by Enian is made by renewable energy developers for developers, advisors, and investors.
Find your perfect project location by drilling down in our clickable map - anywhere in the world.
We’re passionate about making good energy data usable and affordable for every professional in the value chain. With data collection and analysis automated, you’re free to focus on closing quality deals quickly and at low cost.
100 million unique data points organized to work for you.
Here’s how we’re harnessing the power of big data and machine learning to generate serious time and cost savings for our customers:
Built on open access data
Our power plant and grid network datasets are mined from hundreds of open-access government and academic sources and community projects. We enhance these datasets by using the latest advances in satellite imagery and optical image recognition.
Our weather resource datasets are sourced from the World Bank Group.
Validated by Artificial Intelligence
Incoming asset- and grid-level data points are funneled through an automated validation pipeline that confirms each is paired with the correct geolocation, while image recognition scripts check that the satellite imagery matches the described characteristics of each asset. We also employ advanced web scraping and processing techniques to validate asset properties.
A small team of PhD-level analysts then reviews the database for quality assurance.
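To illustrate the kind of geolocation check described above, here is a minimal sketch in Python. It assumes a hypothetical record format (`asset_id`, `lat`, `lon`) and a reference coordinate for the same asset; a record is flagged when its reported position falls too far from the reference. The field names and the 5 km threshold are illustrative, not Enian's actual pipeline.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def validate_geolocation(record, reference, max_km=5.0):
    # Flag the record if its reported coordinates sit more than
    # max_km from the reference location for the same asset.
    d = haversine_km(record["lat"], record["lon"],
                     reference["lat"], reference["lon"])
    return {"asset_id": record["asset_id"],
            "distance_km": round(d, 2),
            "valid": d <= max_km}
```

In practice a check like this would run alongside the image recognition and web scraping steps, with failing records routed to analysts for manual review.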
High-Performance Computing Lab
Our in-house lab, built for machine learning, lets us rapidly test our algorithms using GPGPUs (general-purpose graphics processing units) at low cost, without the overhead charged by cloud providers.
We can scale our algorithms quickly and test them on large datasets while retaining the low latency of an internal network.