Use Mixer for reimbursement view tests #171
cuducos merged 6 commits into okfn-brasil:master from caiocarrara:use-mixer-for-fixtures
Conversation
@cuducos take a look. More than a pull request, this is a proposal to improve tests.
Changes Unknown when pulling a35b6d9 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.
cuducos left a comment
Hi @cacarrara — that's awesome! As I told @berinhard elsewhere, I was kind of lazy/accommodated in addressing fixtures with a proper tool. mixer sounds like a really concise and straightforward way to address it! Many thanks for that.
I made some comments. Most of them concern this PR; some are more overarching, towards the project as a whole (say, using mixer everywhere). Please let me know if you want to handle these overarching edits; otherwise I'll wait for the changes strictly related to this PR, then merge to a secondary branch where I can cover the other tests before bringing it to master.
.travis.yml
Outdated
- psql -U postgres -c 'create database "jarbas";'
- yarn install
- python -m pip install -r requirements.txt coveralls
- python -m pip install -r requirements-test.txt coveralls
We're trying to install coveralls twice. I'd change the strategy here (a sketch of the result follows this list):
- Rename requirements-test.txt to requirements-dev.txt
- Include -r requirements.txt as the first line of the new requirements-dev.txt
- Move django-test-without-migrations from requirements.txt to requirements-dev.txt
- Use only requirements-dev.txt in .travis.yml
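For illustration, the end state might look roughly like this; the exact dependency lists are assumptions, not the actual repo files:

requirements-dev.txt:
    -r requirements.txt
    coveralls
    django-test-without-migrations
    mixer

.travis.yml install step:
    - python -m pip install -r requirements-dev.txt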
from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase
Unnecessary blank line: mixer can be imported in the same block as the Django imports (third-party imports, according to PEP 8).
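For instance, assuming these are the imports in question, the third-party block could read:

from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase
from mixer.backend.django import mixer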
for fixture in data:
    Reimbursement.objects.create(**fixture)

mixer.cycle(5).blend(Reimbursement)
Actually the 5 reimbursements created before had controlled differences to make the tests more precise. Surely the previous code was poor and yours is way better. So… just to confirm: 5 here is quite arbitrary (before, we needed five due to the combination of differences to properly test API requests), right? I mean… when we actually need a specific record now, we create it in the test itself, not during setUp, is that right? If so we might reduce from 5 to… whatever, 3. What do you think?
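A sketch of that pattern; the suspicions value here is an assumption for illustration, not taken from this PR:

# in setUp: a few generic records are enough for listing tests
mixer.cycle(3).blend(Reimbursement)

# in a test that needs a controlled difference, create it explicitly
suspicious = mixer.blend(Reimbursement, suspicions={'over_monthly_subquota': True})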
('order_by', 'probability'),
('suspicious', '1'),
)
url = '{}?{}'.format(self.url, urlencode(search_data))
from mixer.backend.django import mixer

from jarbas.core.models import Reimbursement
from jarbas.core.tests import sample_reimbursement_data, suspicions
If we're replacing this fixture, we can clean up jarbas/core/tests/__init__.py and use mixer in the other tests that depend on them… using mixer only sometimes IMHO tends to lead to a bigger mess ; )
if not hasattr(self.reimbursement, result_attr):
    continue
expected_value = getattr(self.reimbursement, result_attr)
self.assertEqual(str(result_value), str(expected_value))
This logic is pretty clever but I think it makes it less precise what we're actually testing… IMHO it's a good idea to keep logic extremely simple in tests to make them obvious. That's why here I prefer the long dict: at a glimpse one can know what to expect as the result of test_contents, the proper types etc. What do you think?
@cuducos, good points. I'll try not to be too long-winded.
I don't completely agree with that. The test here should check that each API response attribute is equal to the corresponding object attribute in the database, and that's what is being done. Of course:
response = load(api_response)
mydict = dict(one=1, two=2)
assert response == mydict
is at first glance much more obvious than:
response = load(api_response)
mydict = dict(one=1, two=2)
for key, value in mydict.items():
    assert response[key] == value
However, I still think the positive consequences outweigh the negative ones. Think about changing (for any reason) the response content structure, not the values: if we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule nor an application requirement. On the other hand, if we change some business logic that makes the API response differ from the object's attribute value, the test will still fail.
I think tests should be abstract and resilient enough to test requirements properly without being a pain for developers.
These are just my thoughts; let me know what you think.
ps.: on this topic, I'd recommend this post: http://blog.cleancoder.com/uncle-bob/2017/05/05/TestDefinitions.html
If we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule nor an application requirement.
If we change an attribute name we break the JSON API. So… yep, I think a failing test would be welcome in that scenario ; )
In addition to my former comment:
The test you're suggesting is indeed interesting, but as a test_context (i.e. testing the data we're getting from the model and passing to the template). But with the class-based view from DRF this step is abstracted away (and consequently not tested).
What test_contents is actually testing is whether the JSON output is what our frontend expects. So even if we change some attribute name, we can do some tweaks to avoid breaking compatibility with the frontend (i.e. tests will pass because the JSON won't change) — or, surely better, have a failing test to remind us we've broken something and the JSON is no longer what we expected, so we can adapt the frontend and our expectations…
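A minimal sketch of the explicit-dict style being described; the field names and values are illustrative, not the actual Reimbursement serialization:

from json import loads

def test_contents(self):
    response = self.client.get(self.url)
    contents = loads(response.content.decode('utf-8'))
    # one explicit dict makes the expected payload obvious at a glance
    expected = dict(document_id=42, suspicious=True)  # hypothetical fields/values
    self.assertEqual(expected, contents)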
There was a problem hiding this comment.
I got your point and you're right. I was considering test_contents as a unit test, but it's actually an integration test. I'm going to do it.
@patch('jarbas.core.models.head')
def test_fetch_non_existing_receipt(self, mocked_head):
    mocked_head.return_value.status_code = 404
    cache.clear()
Any specific reason to remove that?
I understood it was only preparing the test to fetch a non-existing receipt from a reimbursement without a receipt. So I did the proper preparation instead of clearing the cache, which seemed to be a workaround. Since it's now easy and simple to create new objects, that's what I did.
Was there any other reason to clear the cache?
Hum… if for some reason the URL was accessed before, the cache might fool the test. As we have no control over test ordering, this might happen… but it's not a sure thing. Let's remove it and if this becomes a problem we put it back ; )
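For reference, a sketch of the preparation described above; the receipt_url field is an assumption about the Reimbursement model, not taken from this diff:

@patch('jarbas.core.models.head')
def test_fetch_non_existing_receipt(self, mocked_head):
    mocked_head.return_value.status_code = 404
    # a freshly blended object with no receipt, so no stale cache is needed
    reimbursement = mixer.blend(Reimbursement, receipt_url=None)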
jarbas/core/models.py
Outdated
    return list(map(lambda x: cast(x), parts)) if cast else parts
except ValueError:
    # this is far from good
    return list()
I'm assuming the except ValueError might happen in cast, right? If so, I don't think we should silence this error. If the user is trying to convert something unconvertible, it's better to stop them than to lose data silently.
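A sketch of the non-silencing version; the method name and signature here are assumptions about the surrounding code:

def as_parts(value, cast=None):
    parts = value.split(',')
    # no try/except: let ValueError propagate so bad input fails loudly
    return [cast(part) for part in parts] if cast else parts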
@cuducos do you think we should change all tests within only one PR? I think we could do it one PR per test file. It would be easier and better to review/approve, so we could minimize the chances of errors. Temporarily having only some tests running with mixer doesn't seem to raise any issues. I can handle the tests refactoring.
Changes Unknown when pulling 5268291 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.
@cuducos and @jtemporal the changes are done. Take a look 😜
Wow! Many thanks, @cacarrara <3
This change makes use of mixer [1] for fixtures in Reimbursement view tests.
[1] - https://github.com/klen/mixer/