Add Essent dynamic price integration #157010
base: dev
Conversation
```python
# These will continue running regardless of API success/failure
coordinator.start_schedules()

entry.async_on_unload(coordinator.async_shutdown)
```
Why do we have this?
The coordinator uses custom scheduling instead of update_interval for two reasons:
- API fetch: Runs hourly at a random minute offset (0-59) to spread load across all users
- Listener tick: Runs exactly on the hour to advance sensors to the next tariff using cached data (no API call)
These timers are registered via async_track_point_in_utc_time and need to be cancelled on unload.
Is there a better pattern for this, or a way the base coordinator can handle this?
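For reference, a minimal standalone sketch of the pattern described above (the real coordinator subclasses DataUpdateCoordinator, and the method names _async_fetch and _async_tick are illustrative, not the ones in this PR):

```python
# Sketch only: hourly API fetch at a random minute offset plus an on-the-hour
# listener tick, both cancelled on unload.
from datetime import timedelta
from random import randint

from homeassistant.helpers.event import async_track_point_in_utc_time
from homeassistant.util import dt as dt_util


class EssentCoordinatorSketch:
    def __init__(self, hass):
        self.hass = hass
        self._unsub_fetch = None
        self._unsub_tick = None
        # Random minute offset spreads API load across all users.
        self._fetch_minute = randint(0, 59)

    def start_schedules(self):
        """Schedule the next API fetch and the next on-the-hour tick."""
        now = dt_util.utcnow()
        next_hour = (now + timedelta(hours=1)).replace(minute=0, second=0, microsecond=0)

        # API fetch: once per hour at this instance's random minute offset.
        self._unsub_fetch = async_track_point_in_utc_time(
            self.hass,
            self._async_fetch,
            next_hour + timedelta(minutes=self._fetch_minute),
        )
        # Listener tick: exactly on the hour, advances sensors using cached data.
        self._unsub_tick = async_track_point_in_utc_time(
            self.hass, self._async_tick, next_hour
        )

    async def _async_fetch(self, _now):
        """Fetch tariffs from the API, then reschedule (elided)."""

    async def _async_tick(self, _now):
        """Push cached data to listeners, then reschedule (elided)."""

    def async_shutdown(self):
        """Cancel both timers when the config entry unloads."""
        if self._unsub_fetch:
            self._unsub_fetch()
        if self._unsub_tick:
            self._unsub_tick()
```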
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
```python
@property
def extra_state_attributes(self) -> dict[str, Any]:
    """Return extra attributes."""
    now = dt_util.now()
    data = self.coordinator.data[self.energy_type]
    tariffs: list[dict[str, Any]] = data["tariffs"] + data.get(
        "tariffs_tomorrow", []
    )

    # Find current tariff
    current_tariff = None
    for tariff in tariffs:
        start, end = _parse_tariff_times(tariff)
        if start and end and start <= now < end:
            current_tariff = tariff
            break

    attributes: dict[str, Any] = {}

    # Current price breakdown
    if current_tariff:
        groups = {
            group["type"]: group.get("amount")
            for group in current_tariff.get("groups", [])
            if "type" in group
        }
        attributes.update(
            {
                "price_ex_vat": current_tariff.get("totalAmountEx"),
                "vat": current_tariff.get("totalAmountVat"),
                "market_price": groups.get("MARKET_PRICE"),
                "purchasing_fee": groups.get("PURCHASING_FEE"),
                "tax": groups.get("TAX"),
                "start_time": _format_dt_str(current_tariff.get("startDateTime")),
                "end_time": _format_dt_str(current_tariff.get("endDateTime")),
            }
        )

    return attributes
```
We don't want to put big objects in state attributes as they're stored in the database as well. So ideally they should be entities, or service calls with a return value, but that'd be a different PR
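For context, a service call with a return value ("action call") could look roughly like the sketch below; the service name get_tariffs, the energy_type field, and the hass.data layout are assumptions for illustration, not this PR's actual API:

```python
# Illustrative sketch of a response-only service that returns tariff data
# instead of storing it in state attributes.
import voluptuous as vol

from homeassistant.core import (
    HomeAssistant,
    ServiceCall,
    ServiceResponse,
    SupportsResponse,
)

DOMAIN = "essent"


async def async_setup_services(hass: HomeAssistant) -> None:
    """Register a service that returns the tariff breakdown on demand."""

    async def get_tariffs(call: ServiceCall) -> ServiceResponse:
        # Assumption: coordinators are stored in hass.data[DOMAIN] keyed by entry_id.
        coordinator = next(iter(hass.data[DOMAIN].values()))
        return {"tariffs": coordinator.data[call.data["energy_type"]]["tariffs"]}

    hass.services.async_register(
        DOMAIN,
        "get_tariffs",
        get_tariffs,
        schema=vol.Schema({vol.Required("energy_type"): str}),
        supports_response=SupportsResponse.ONLY,
    )
```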
Thanks for the heads-up. Right now the attributes only include a few scalar breakdown fields and start/end times; we’re not storing full tariff lists or raw payloads. If you’d prefer those breakdowns as entities or a service, I can move them, but the current attributes are intentionally minimal to avoid DB bloat.
Let's remove them for now; we could look at this in a later PR if you like.
If I break them out into separate entities, disabled by default, would that be acceptable? (Especially since I'm refactoring to use descriptors, it wouldn't be a huge change.)
I really want them in, as this price breakdown is the USP of this integration. Without it you wouldn't be able to separate the market and tax components anymore.
start_time / end_time can go, as that is already captured by the recorder.
I think we can have many entities, and we can disable them by default.
At this point I'm like: you know this info, so let's ideally remove the state attributes, and the rest we can get via the action call.
The reason we also want this is that people want different data and want to calculate things differently. This way users can do that without unnecessarily storing extra data in the database.
Ok, i'll remove the attributes, and add them back as entities in a follow up PR.
I decided against waiting for a second PR, because ultimately I just can't support this PR without these entities. It would lose important breakdown values that are provided by the Essent API (and cannot be calculated, by the way). And it is a minor change to include in this PR now that we're using entity descriptors.
They are now added as disabled entities. This should address your concerns regarding database bloat while retaining full functionality.
Oh, but this is fine: as long as we don't put large objects in the attributes, this is fine. Now we have something that is contextual. I would advise adding more context to the entities though, like entity_categories.
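As a rough illustration of that suggestion (key, unit, and precision are assumptions, not the PR's actual descriptions), one breakdown sensor could be described as disabled by default and diagnostic:

```python
# Hypothetical description for a single breakdown sensor.
from homeassistant.components.sensor import SensorEntityDescription
from homeassistant.const import EntityCategory

MARKET_PRICE_DESCRIPTION = SensorEntityDescription(
    key="market_price",
    translation_key="market_price",
    native_unit_of_measurement="EUR/kWh",
    suggested_display_precision=5,
    entity_category=EntityCategory.DIAGNOSTIC,
    entity_registry_enabled_default=False,  # opt-in, keeps the recorder lean
)
```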
Ok, good. But I have to admit I'm confused about what you mean by "don't put large objects in the attributes". What are large objects?
I had some scalar values and 2 datetime values.
As a data engineer, I think that is peanuts. But you keep referring to them as large objects.
I'm not an HA expert - I'm very much getting my feet wet with this first core integration, but it has been interesting - so there might be some limitations of the recorder I'm not aware of, which is why I haven't pushed back.
Remove unnecessary form step that showed an empty modal during setup. The integration now creates the config entry immediately and proceeds directly to area assignment, matching the behavior of similar zero-configuration integrations like EnergyZero and Nord Pool. Also remove the unnecessary create_entry message added in the previous commit, as it's not needed for simple integrations.
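A zero-configuration flow along these lines simply creates the entry in the first user step; the class and domain names below are assumptions for illustration, not copied from the PR:

```python
# Minimal sketch of a config flow with no user-facing form.
from typing import Any

from homeassistant.config_entries import ConfigFlow, ConfigFlowResult

DOMAIN = "essent"


class EssentConfigFlow(ConfigFlow, domain=DOMAIN):
    """Handle the zero-configuration setup flow."""

    VERSION = 1

    async def async_step_user(
        self, user_input: dict[str, Any] | None = None
    ) -> ConfigFlowResult:
        """Create the entry immediately instead of showing an empty form."""
        if self._async_current_entries():
            return self.async_abort(reason="single_instance_allowed")
        return self.async_create_entry(title="Essent", data={})
```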
- Fix ConfigFlowResult return type in config_flow
- Fix AddConfigEntryEntitiesCallback type in sensor platform
- Add type narrowing for parse_datetime to handle Any | None
- Simplify entity creation using list literal instead of loop
- Add 12 new tests (22 total, up from 10)
- 6 coordinator error path tests
- 6 scheduling logic tests using HA time utilities
- Improve coverage from 84% to 96% (exceeds 95% Silver requirement)
- Properly test time-based scheduling with async_fire_time_changed (see the sketch below)
- Make scheduling fixture opt-in rather than autouse
- All Silver tier requirements met (7/7 done, 1 exempt)
- Update quality_scale from bronze to silver
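For example, a scheduling test in this style could advance a frozen clock and fire Home Assistant's time event; this sketch assumes a fixture has already set up the integration with a mocked API, and the entity id is illustrative:

```python
# Hedged sketch: tick the frozen clock past the hour and let HA's time
# listeners fire, then assert on the resulting entity state.
from datetime import timedelta

from freezegun.api import FrozenDateTimeFactory

from homeassistant.core import HomeAssistant

from tests.common import async_fire_time_changed


async def test_sensor_advances_on_the_hour(
    hass: HomeAssistant, freezer: FrozenDateTimeFactory
) -> None:
    """The hourly tick should move the sensor to the next cached tariff."""
    freezer.tick(timedelta(hours=1))
    async_fire_time_changed(hass)
    await hass.async_block_till_done()

    state = hass.states.get("sensor.essent_electricity_price")
    assert state is not None
```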
@joostlek thanks for your constructive criticism and for nudging me in the right direction. I'm learning a lot about what makes an integration robust and what good design patterns look like (PyPI, strong typing, entity descriptions). These changes have surfaced some bugs with unexpected API responses that I've now also been able to preemptively tackle in this PR.
joostlek left a comment:
Some comments in between
I'd recommend taking a look at tests like mealie.
- You can do a lot with normal patch instead of monkeypatch
- Don't patch internals
- Don't touch internals (so we don't test the coordinator directly; we test the behavior of the coordinator via the entities it connects to)
- Test files generally take the name of the main module, so test_integration and test_scheduling aren't generally a thing

But please only look at these once you're happy with the main code.
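A hedged sketch of that style, patching only the API client boundary and asserting on entity state; the module path, client name, payload shape, and entity id are assumptions, not taken from this PR:

```python
# Illustrative only: mock the client where the integration imports it, set up
# the entry, and read the resulting entity state instead of touching the
# coordinator directly.
from unittest.mock import AsyncMock, patch

from homeassistant.core import HomeAssistant

from tests.common import MockConfigEntry


async def test_price_sensor_state(hass: HomeAssistant) -> None:
    """Setting up the entry exposes the current price as a sensor state."""
    entry = MockConfigEntry(domain="essent")
    entry.add_to_hass(hass)

    client = AsyncMock()
    client.get_tariffs.return_value = {"tariffs": []}  # placeholder; use fixture data

    with patch(
        "homeassistant.components.essent.coordinator.EssentClient",  # assumed path
        return_value=client,
    ):
        assert await hass.config_entries.async_setup(entry.entry_id)
        await hass.async_block_till_done()

    state = hass.states.get("sensor.essent_electricity_price")
    assert state is not None
```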
I refactored the test suite. I'm not aiming for platinum standards here. Hopefully the PR is moving in the right direction to be merged soon.
Co-authored-by: Joost Lekkerkerker <[email protected]>
Proposed change
Add integration for monitoring Essent dynamic energy prices in the Netherlands.
Full disclosure: I work at Essent, but this integration is my personal contribution and is not officially provided by Essent N.V.
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (ruff format homeassistant tests)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: python3 -m script.hassfest.
- Dependencies have been added to requirements_all.txt. Updated by running python3 -m script.gen_requirements_all.

To help with the load of incoming pull requests: