Ever since its early days, Ruby on Rails has come with excellent support for testing. Over time, its focus has shifted from unit to integration testing. The functionality for testing controllers has actually been split off into a separate gem. I still like to write tests for controllers and views in my Rails projects. Here’s why.
What to test about Rails controllers
Rails controller tests are used to verify the behaviour of a controller in terms of parameter handling and HTTP interactions. For example, given a typical Rails controller action like this:
class ArticlesController < ApplicationController
  # POST /articles
  def create
    @article = Article.new(article_params)

    if @article.save
      redirect_to @article, notice: 'Article was successfully created.'
    else
      render :new
    end
  end

  private

  # Only allow a list of trusted parameters through.
  def article_params
    params.require(:article).permit(:title)
  end
end
We might want to test a couple of things:
- When we provide valid article parameters, are we redirected to the new article page with the appropriate flash message?
- When we provide invalid article parameters, do we end up on the “new” template?
- When we do save the article, do we only assign the attributes we want to have assigned, and not something like published_at?
Testing controllers using RSpec
Tests for controller actions like this tend to look mostly the same. When using RSpec, they might look like this:
RSpec.describe ArticlesController, type: :controller do
  describe 'POST create' do
    let(:article) { Article.new(id: 1) }

    before do
      allow(Article).to receive(:new).and_return(article)
    end

    it 'routes to /articles' do
      expect(post: '/articles').to route_to('articles#create')
      expect(articles_path).to eql('/articles')
    end

    context 'given valid params' do
      before do
        allow(article).to receive(:save).and_return(true)
      end

      it 'redirects to the article page' do
        post :create, params: { article: { title: 'title' } }
        expect(response).to redirect_to('/articles')
      end

      it 'sets a flash notice' do
        post :create, params: { article: { title: 'title' } }
        expect(flash[:notice]).to eql('Article was successfully created.')
      end
    end

    context 'given missing params' do
      it 'raises an error' do
        expect { post :create, params: { article: {} } }
          .to raise_error(ActionController::ParameterMissing)
      end
    end

    context 'given extra params' do
      it 'ignores the extra params' do
        post :create, params: { article: { title: 'title', created_at: '2020-01-01 12:00:00' } }
        expect(Article).to have_received(:new).with(hash_excluding(:created_at))
      end
    end

    context 'given invalid params' do
      before do
        allow(article).to receive(:save).and_return(false)
      end

      it 'renders the new template' do
        post :create, params: { article: { title: 'title' } }
        expect(response).to have_http_status(:ok)
        expect(response).to render_template('new')
      end

      it 'sets no flash notice' do
        post :create, params: { article: { title: 'title' } }
        expect(flash[:notice]).to be_nil
      end

      it 'assigns the article' do
        post :create, params: { article: { title: 'title' } }
        expect(assigns[:article]).to be(article)
      end
    end
  end
end
It’s a little repetitive, but that’s nothing some good editor templates and snippets can’t help you with.
In terms of regression testing, all of this might also be accomplished using system tests:
RSpec.describe 'articles', type: :system do
  it 'creates a new article' do
    visit '/articles/new'
    fill_in 'Title', with: 'title'
    click_button 'Create Article'

    expect(page).to have_css('.notice', text: 'Article was successfully created')
  end

  it 'shows errors when there are validation errors' do
    visit '/articles/new'
    fill_in 'Title', with: ''
    click_button 'Create Article'

    expect(page).to have_content("Title can't be blank")
  end
end
Arguably, these tests are easier to write — and shorter. Your code coverage report will probably happily inform you that you have 100% code coverage. But are these better tests?
Note: Rails also offers integration tests (request specs, in RSpec terms), which exercise HTTP interactions against the whole stack, including routing, sessions and view rendering. I regard these more as an alternative to system tests without a browser than as an alternative to controller unit tests.
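For reference, a request spec for the same create action might look like this. This is a sketch; unlike the controller spec above, it goes through real routing, the model and the database:

RSpec.describe 'Articles', type: :request do
  it 'creates an article and redirects to it' do
    expect {
      post '/articles', params: { article: { title: 'title' } }
    }.to change(Article, :count).by(1)

    expect(response).to redirect_to(article_path(Article.last))
  end
end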
Comparing system and controller tests
In terms of regression testing, I would argue they offer comparable value. Both tests would fail if anything in the implementation changed or were implemented incorrectly. But how would they fail? Let’s say we did not set our flash message correctly, and compare the error messages. The system test would fail like this:
  1) articles creates a new article
     Failure/Error: expect(page).to have_css('.notice', text: 'Article was successfully created')
       expected to find css ".notice" but there were no matches
     [Screenshot Image]: /Users/avdgaag/code/tmp/tmp/screenshots/failures_r_spec_example_groups_articles_creates_a_new_article_718.png
     # ./spec/system/articles_system_spec.rb:13:in `block (2 levels) in <top (required)>'
While the unit tests would fail with this:
  1) ArticlesController POST create given valid params sets a flash notice
     Failure/Error: expect(flash[:notice]).to eql('Article was successfully created.')
       expected: "Article was successfully created."
            got: nil
       (compared using eql?)
     # ./spec/controllers/articles_controller_spec.rb:28:in `block (4 levels) in <top (required)>'
The second error message is easier to parse and more helpful in pinpointing exactly where to look for the problem.
Second, let’s review test runtimes. The system test exercises your whole stack, which is valuable, but it is not fast. Even when driven by rack-test, which skips the browser entirely:
Finished in 0.07465 seconds (files took 1.64 seconds to load)
2 examples, 0 failures
It gets worse when you don’t use rack-test but switch to Chrome headless instead:
Finished in 1.67 seconds (files took 1.7 seconds to load)
2 examples, 0 failures
Compare to the runtime of the unit tests for both the controller and views:
Finished in 0.0688 seconds (files took 1.59 seconds to load)
9 examples, 0 failures
Note that this is the runtime of all 9 controller and view tests combined. Running them individually is much faster, allowing for a much quicker TDD feedback cycle.
For one or two tests, the differences are negligible. But if you get to thousands of tests, and you mostly write system tests, you are going to feel the difference. It’s not hard to build up test suites running for hours. And tests that are too slow are not run. Tests that are not run are not updated. Tests that are not updated are not trusted. And an untrusted test suite leaves you with all the costs and none of the benefits of maintaining it.
Second-order effects of testing strategy
More importantly, consider the impact of different testing strategies on the resulting code. So far, we have taken the implementation code as a given. But test-driven development is called test-driven for a reason. Consider the conceptual distance between test and implementation code in both examples. Controller tests are unit tests, close to the object under test. The system test, by definition, is much farther removed from any particular object in the codebase.
Let’s say you do test-driven or test-first development and you take the outside-in approach, writing the system test first. It’s tempting to take the failing system test and write some implementation code to make it pass. And then some more. The first object you implement to get your system test passing is — again, by definition — disconnected from the test. That sets you on a path of writing disconnected implementation code. It probably feels like you are doing test-driven development, but you are actually missing out on its primary benefit: design feedback.
Conceptual distance of test and implementation code
Unit tests are written in the same conceptual language as the implementation, and are therefore much more closely connected to it. A unit test for a controller deals with parameters and HTTP status codes, not with filling out forms and finding HTML content on a page. That makes it much easier to listen to what the tests are telling you: if a controller is about HTTP, the tests are about HTTP. If you want to write code for exporting data as CSV, or sending an email, or anything else like that, it is going to be hard to express in a controller unit test.
Take this example controller action that also outputs articles in CSV format:
class ArticlesController < ApplicationController
  def index
    @articles = Article.all

    respond_to do |format|
      format.html
      format.csv do
        csv = CSV.generate do |c|
          @articles.each do |article|
            c << [article.id, article.title]
          end
        end
        send_data csv
      end
    end
  end
end
How would you test that?
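You could, for instance, create a real record and assert on the raw response body. A hypothetical sketch:

it 'exports articles as CSV' do
  article = Article.create!(title: 'title')

  get :index, format: :csv

  expect(response.body).to include("#{article.id},title")
end

Even this small test needs database records and string assertions against a serialization format, neither of which has much to do with the controller’s actual responsibilities.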
Now, at this point, the discussion can go two ways:
- On the one hand, some people argue that code like this is too hard to test in a controller test. Therefore, we should abandon controller tests and stick to higher-level system tests.
- On the other hand, other people (me included) argue that this is exactly why you should stick to the lower-level unit test: to show you that this is too hard. That difficulty is the test telling you that you are trying to do too much here. This does not belong here. You need to change your design.
Seams and writing the code you wish you had
If you do find yourself wanting to write functionality that is painful to test in a unit test, you pull the classic TDD move: you write the code you wish you had — typically a call to another object — and then make that object work later. In the test, you stand in for it with a stub or a fake. The role of the system test, then, is not to verify individual pieces of logic, but to verify that all objects work together correctly.
For example, the previous example of providing a CSV export of articles could also be implemented like this:
class ArticlesController < ApplicationController
  def index
    @articles = Article.all

    respond_to do |format|
      format.html
      format.csv { send_data CsvExporter.new(@articles).call }
    end
  end
end
Note that Rails is not particularly friendly to dependency injection, so we just instantiate a new object here. Regardless, the new object introduces a seam in the code that makes it easy enough to test:
it 'exports CSV formatted articles' do
  allow_any_instance_of(CsvExporter)
    .to receive(:call).and_return('csv data')

  get :index, format: :csv

  expect(response.body).to eql('csv data')
end
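The CsvExporter itself is not shown here; a minimal sketch of what such an object and its own unit test could look like, assuming it exposes a single call method:

require 'csv'

# Turns a collection of articles into a CSV string, one row per article.
class CsvExporter
  def initialize(articles)
    @articles = articles
  end

  def call
    CSV.generate do |csv|
      @articles.each do |article|
        csv << [article.id, article.title]
      end
    end
  end
end

RSpec.describe CsvExporter do
  it 'renders one row per article' do
    articles = [Article.new(id: 1, title: 'One'), Article.new(id: 2, title: 'Two')]

    expect(CsvExporter.new(articles).call).to eql("1,One\n2,Two\n")
  end
end

The exporter’s test no longer needs a controller, a request or even a database: plain Ruby objects are enough.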
Note: I am not arguing against outside-in development. I think it works well — as long as you drop from system tests to unit tests in time. That step is not made any easier by Rails’ removal of default support for controller tests.
The case for testing view templates
The same principles apply to Rails’ view tests, which have gone out of style but offer the same kind of value. The friction that nudges us to write only system tests and then go on an implementation-code bonanza applies to view code just as much. We all know our view templates should not contain business logic, yet most Rails view templates I come across would easily make 2005-era PHP developers squirm. Why? Because it is just a little too easy to write a bunch of implementation code first, without ever feeling the friction of tests that just don’t seem to fit.
Rails view tests allow us to render an isolated template and make some assertions about it. Granted, the out-of-the-box solution is not great, as it mostly relies on string matching. For example, given a template like this:
<h1>Listing articles</h1>

<% if @articles.any? %>
  <% @articles.each do |article| %>
    <h2><%= link_to article.title, article_path(article) %></h2>
  <% end %>
<% else %>
  <p>There are no articles.</p>
<% end %>
We might write a test asserting that we see “There are no articles” when there are no articles yet:
RSpec.describe "articles/index.html.erb", type: :view do
it "shows a placeholder when there are no articles" do
assign(:articles, Article.none)
render
assert_select 'p', text: 'There are no articles'
end
end
Note: although assert_select works, it’s not great or particularly intuitive. I prefer using Capybara’s smarter matchers:
RSpec.describe "articles/index.html.erb", type: :view do
let(:page) { Capybara.string(rendered) }
it "shows a placeholder when there are no articles" do
assign(:articles, Article.none)
render
expect(page).to have_content("There are no articles")
end
end
It’s not hard to see how you might further test this template:
RSpec.describe "articles/index.html.erb", type: :view do
let(:page) { Capybara.string(rendered) }
it "links to each article" do
assign(
:articles,
[
Article.new(id: 1, title: "One"),
Article.new(id: 2, title: "Two")
]
)
render
expect(page).to have_link("One", href: "/articles/1")
expect(page).to have_link("Two", href: "/articles/2")
end
end
Writing view tests around a rendered template makes it pretty painful to deal with anything other than assigning data and asserting on rendered contents. If you want to do more, you are nudged to write the code you wish you had and introduce seams via helper methods, decorators, view components and non-ActiveRecord models — and then test those.
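To make that concrete, here is a hypothetical example, not taken from the code above and assuming the Article model has the published_at column mentioned earlier: formatting logic moves out of the template into a helper, which gets its own fast unit test.

module ArticlesHelper
  # Human-readable publication state for an article.
  def publication_status(article)
    if article.published_at
      "Published on #{article.published_at.strftime('%Y-%m-%d')}"
    else
      'Draft'
    end
  end
end

RSpec.describe ArticlesHelper, type: :helper do
  it 'describes a published article' do
    article = Article.new(published_at: Time.utc(2020, 1, 1))

    expect(helper.publication_status(article)).to eql('Published on 2020-01-01')
  end

  it 'describes an unpublished article as a draft' do
    expect(helper.publication_status(Article.new)).to eql('Draft')
  end
end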
Conclusion
Note that for both controllers and views, the same principles are at work. If you write all your implementation code based on a handful of system tests, then adding unit tests after the fact feels laborious and of questionable value. But if you embrace the drop from system tests down to unit tests and introduce seams in your code, you can reap the benefits of the test-driven approach and let the friction of testing guide you to better software design. As a side effect, you get much more intention-revealing, faster and more precise tests.
To embrace unit testing and apply it to Rails views and controllers, you need to do a couple of things:
- Install the rails-controller-testing gem so that assigns and render_template work in controller specs again (see the snippet after this list).
- Set up a few good file templates, editor snippets and/or code generators to generate the boring boilerplate code quickly.
- Skip your system and integration tests when measuring your code coverage, and then try to keep that measurement at an acceptable level.
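For the first item, adding the gem to the test group of your Gemfile is usually all that is needed; RSpec picks it up automatically:

# Gemfile
group :test do
  gem 'rails-controller-testing'
end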
Good luck on your journey to introducing more seams, writing better code and less frustrating test suites!