add missing tests #85
base: main
Conversation
This reverts commit 5e8ea8b.
Summary of Changes

Hello @viseshrp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request significantly enhances the test suite for the workedon package.
Pull request overview
This PR adds comprehensive test coverage for the workedon package by introducing multiple new test files covering core functionality, edge cases, and integration scenarios.
Key Changes:
- Added unit tests for core modules including parser, models, configuration, utilities, and CLI
- Introduced edge case testing for parser datetime handling, duration parsing, and tag operations
- Added integration tests covering complete workflows with save, fetch, modify, and delete operations
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| tests/test_workedon.py | Tests for core workedon module functions including work generation, prefetching, date ranges, and error handling |
| tests/test_version_main.py | Tests for package version handling and main module invocation |
| tests/test_utils_edge_cases.py | Edge case tests for utility functions including hash generation, timezone handling, and datetime conversion |
| tests/test_utils.py | Core utility function tests including settings loading, option decoration, and datetime operations |
| tests/test_parser_edge_cases.py | Extensive edge case testing for input parser covering datetime, duration, and tag parsing |
| tests/test_parser.py | Basic parser functionality tests for datetime, duration, and work text cleaning |
| tests/test_models_edge_cases.py | Edge case tests for database models including constraints, cascading, and special characters |
| tests/test_models.py | Core database model tests including migrations, schema creation, and model operations |
| tests/test_integration.py | Integration tests covering complete user workflows and multi-step operations |
| tests/test_exceptions.py | Tests for all custom exception types and their string representations |
| tests/test_default_settings.py | Validation tests for default settings values |
| tests/test_constants.py | Tests verifying expected values of application constants |
| tests/test_conf.py | Tests for settings configuration, file creation, and user settings merging |
| tests/test_cli_branches.py | Tests for CLI module edge cases including debug mode and callback execution |
| tests/test_cli.py | Enhanced CLI tests including new test cases for subcommands, duration handling, and tag listing |
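To give a concrete flavor of the edge-case coverage listed above, here is a minimal sketch of the style of parser test this PR adds. The function name parse_datetime and its import path are illustrative assumptions, not identifiers taken from the diff.

```python
# Hypothetical sketch: parse_datetime and its module path are assumed
# for illustration; the real test files use the package's actual API.
import pytest

from workedon.parser import parse_datetime  # assumed import path


@pytest.mark.parametrize("raw", ["", "   ", "not-a-date", "32 Jan 2026"])
def test_parse_datetime_rejects_garbage(raw: str) -> None:
    # Invalid inputs should fail loudly rather than silently
    # producing a wrong timestamp.
    with pytest.raises(Exception):
        parse_datetime(raw)
```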
Codecov Report

✅ All modified and coverable lines are covered by tests.

Coverage diff against main:

| | main | #85 | +/- |
|---|---|---|---|
| Coverage | 86.2% | 99.0% | +12.8% |
| Files | 11 | 11 | |
| Lines | 544 | 543 | -1 |
| Branches | 86 | 85 | -1 |
| Hits | 469 | 538 | +69 |
| Misses | 56 | 2 | -54 |
| Partials | 19 | 3 | -16 |

View full report in Codecov by Sentry.
Code Review
This pull request adds a comprehensive suite of tests, significantly improving the project's test coverage. The new tests cover CLI interactions, configuration, models, data parsing, and utility functions, including many important edge cases. The overall quality of the tests is high. I've provided a few suggestions to improve the clarity, determinism, and correctness of some test cases to make the test suite even more robust and maintainable.
Pull request overview
Copilot reviewed 14 out of 15 changed files in this pull request and generated 1 comment.
The comment is on this change, where the guard around the date-range filter was removed:

     start, end = _get_date_range(start_date, end_date, since, period, on, at)
    -if start and end:
    -    work_set = work_set.where((Work.timestamp >= start) & (Work.timestamp <= end))
    +work_set = work_set.where((Work.timestamp >= start) & (Work.timestamp <= end))

Copilot AI commented on Jan 19, 2026
Removing the conditional check if start and end: makes the date range filter unconditional. However, _get_date_range can return datetime objects even when no date parameters are provided (it defaults to past week). This change is likely intentional, but if start or end could ever be None in edge cases not covered by _get_date_range, this would cause a runtime error. Verify that _get_date_range always returns valid datetime objects.
@copilot _get_date_range always returns tuple[datetime.datetime, datetime.datetime]; there is no path that returns None. Every branch ends up assigning start and end (with defaults, parsed values, or via the final else), then the function raises if start > end, and finally converts both to the internal timezone before returning. So the function itself never produces (None, None); any call site that assumed it might was covering dead code.
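If you want to pin that guarantee down in the suite, a small test along these lines would do it. This is a hedged sketch: the import path and the assumption that all-None arguments select the default past-week window are inferred from the discussion above, not taken from the diff.

```python
# Sketch only: the import path and the all-None default behavior are
# assumptions based on the review discussion, not on the actual diff.
from datetime import datetime

from workedon.workedon import _get_date_range  # assumed location


def test_get_date_range_always_returns_datetimes() -> None:
    # With no filters, the function reportedly defaults to the past week.
    start, end = _get_date_range(None, None, None, None, None, None)
    assert isinstance(start, datetime)
    assert isinstance(end, datetime)
    assert start <= end  # the function raises if start > end
```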
Code Review
This pull request significantly improves the project's test coverage by adding a comprehensive suite of unit, integration, and edge-case tests for various components, including the CLI, parser, models, and configuration. The changes also include a valuable refactoring in tests/conftest.py to make tests more deterministic. I've identified a couple of areas for improvement: disallowing empty tags (a minor logic issue revealed by a new test) and clarifying a test name in test_utils.py. Overall, this is an excellent contribution that boosts the project's quality and maintainability.
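The conftest.py refactor itself is not quoted in this thread. As a purely illustrative sketch of one common way to make time-dependent tests deterministic, a fixture can pin the clock; freezegun and the fixture name here are assumptions, not stated dependencies of workedon.

```python
# Hypothetical sketch: freezegun and this fixture are illustrative
# assumptions; the actual conftest.py change is not shown in this thread.
import pytest
from freezegun import freeze_time


@pytest.fixture(autouse=True)
def frozen_clock():
    # Pin "now" so defaults like the past-week date range are reproducible.
    with freeze_time("2026-01-19 12:00:00"):
        yield
```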
The first inline comment concerns this new test:

    def test_tag_allows_empty_string_name() -> None:
        # This might be undesirable but tests current behavior
        with init_db():
            tag = Tag.create(name="")
            assert tag.name == ""
This test confirms that empty tags are allowed, which is likely not the desired behavior as they don't provide any meaningful information. The comment in the test even hints at this being undesirable.
I recommend disallowing empty tags at the database level for data integrity. You can achieve this by adding a CHECK constraint to the Tag model in workedon/models.py:
    from peewee import ..., Check

    class Tag(Model):
        ...
        name: CharField = CharField(
            unique=True, null=False, constraints=[Check("name != ''")]
        )
        ...

Consequently, this test should be updated to assert that creating an empty tag raises an IntegrityError.
Suggested change:

    -def test_tag_allows_empty_string_name() -> None:
    -    # This might be undesirable but tests current behavior
    -    with init_db():
    -        tag = Tag.create(name="")
    -        assert tag.name == ""
    +def test_tag_disallows_empty_string_name() -> None:
    +    with init_db(), pytest.raises(IntegrityError):
    +        Tag.create(name="")
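For readers who want to see the recommended constraint in action end to end, here is a hedged, self-contained demo; the in-memory database and the standalone model are stand-ins for the real workedon models, not the project's actual setup.

```python
# Self-contained demo of the recommended CHECK constraint; the model and
# database here are illustrative stand-ins, not the real workedon code.
from peewee import CharField, Check, IntegrityError, Model, SqliteDatabase

db = SqliteDatabase(":memory:")


class Tag(Model):
    name = CharField(unique=True, null=False, constraints=[Check("name != ''")])

    class Meta:
        database = db


db.connect()
db.create_tables([Tag])

Tag.create(name="work")  # a non-empty name is accepted
try:
    Tag.create(name="")  # rejected by the CHECK constraint at insert time
except IntegrityError:
    print("empty tag rejected")
```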
Pull Request
Fixes #
Proposed Changes