### Sven Gehring's Blog

I write about software, engineering and stupidly fun side projects.

# There is no magic in Elixir!

If you’re anything like me, you probably started to learn Elixir and wanted to skip to the shiny stuff right away. Sure, there are some basics to sift through, like the data types and specific syntax elements, but after that, we can finally build a distributed, scalable, performant masterpiece of an application! - Riiight? Granted, even if you’re a bit more of a sane person, once you get to work with Supervisor, GenServer, Agent and other such modules, you can’t help but feel that things have been simplified a lot for you. Continue reading

# Serving static assets on a subpath in Phoenix

If you create a new Phoenix project without using the --no-html flag, a static plug will be added to your endpoint. Because of this, a lot of people recommend just editing that if you want to serve static files from a subdirectory. However, this can get a bit tricky if you have data stored in different directories - or use Phoenix purely as an API.

You don’t have to edit the endpoint, though; you can just use Plug.Static in a (sub)scope in your router.

I remembered I did get this working once but wasn’t quite sure how anymore, so I wanted to quickly test whether my answer was correct… and there were some caveats to it. I haven’t found a comprehensive guide on how to do this, so here’s what I learned.

This article assumes you already have a Phoenix project up and running. If you don’t, you can create one with mix phx.new --no-ecto --no-webpack phxstatic, leaving out the database and frontend JS components for the sake of simplicity. For testing, we add a file at priv/test/hello.txt that contains hello world.
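Creating the test asset is a quick sketch from the project root; the path mirrors the :from option used in the router configuration.

```shell
# From the root of the phxstatic project: create the directory and the
# test file that the static pipeline will serve.
mkdir -p priv/test
echo "hello world" > priv/test/hello.txt
cat priv/test/hello.txt
# → hello world
```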

For serving static assets, we need to add a pipeline with the Plug.Static plug to our router. We will also use that pipeline in the scope where we want to serve those files, using the pipe_through macro.

```elixir
defmodule PhxstaticWeb.Router do
  use PhxstaticWeb, :router

  pipeline :static do
    plug Plug.Static,
      at: "/static",
      from: {:phxstatic, "priv/test"}
  end

  scope "/", PhxstaticWeb do
    scope "/static" do
      pipe_through :static
    end
  end
end
```

Caveat 1: Even though the plug is nested in the /static/ scope, the :at option has to be set to the full path!
Caveat 2: It doesn’t do anything yet, now that’s unfortunate.

## Making it work

Caveat 3: Now this is where I got stuck the first time. Everything looks right, so why does it just display the phoenix 404 page? The reason for this is that a pipeline is only invoked once a route in the scope that uses it matches, as explained by José in this post.

So the solution is reasonably simple: we just add a catch-all route to the respective scope. I am not including the source code of ErrorController.notfound here, since you can use any controller/function for rendering a 404.

```elixir
defmodule PhxstaticWeb.Router do
  use PhxstaticWeb, :router

  pipeline :static do
    plug Plug.Static,
      at: "/static",
      from: {:phxstatic, "priv/test"}
  end

  scope "/", PhxstaticWeb do
    scope "/static" do
      pipe_through :static

      get "/*path", ErrorController, :notfound
    end
  end
end
```

If we now open localhost:4000/static/hello.txt, we get hello world - yay! And if we try some other path in that scope, we will just get the response rendered by ErrorController.notfound.

# Phoenix end-to-end testing in real life

There are lots of articles on testing in Elixir, and probably ten times as many for each JavaScript frontend framework. But what if we want to test all of it together? Be advised that end-to-end tests do not replace unit and integration tests on either the backend or the frontend; however, I think they do have their place in a good test suite, for the following reasons:

• We can test for specific user workflows that caused issues in the past to ensure these stay fixed forever.
• Tests are reduced almost entirely to user interaction, so given a working test setup, they can largely be written by QA without knowledge of the software’s inner workings.
• Sometimes we just forget to test little caveats when interacting with the API; this way, we catch them.
• It looks so cool to watch cypress run, like, seriously!

## What we are going to do

• We use cypress for simulating user interaction on the frontend
• We create a Phoenix router plug that is only accessible in the :test env
• We isolate our connections using Ecto’s sandbox adapter and cypress hooks
• We expose our factories with a simple JSON API

## Preparing Phoenix

#### Making sure the webserver is running

For this kind of test, we need to make sure our backend is running even in test mode. Phoenix does not do this by default, but we can fix that by changing the server configuration in config/test.exs.

```elixir
config :myapp, MyApp.Endpoint,
  server: true
```

If you have not dockerized your application, this might cause issues if you attempt to run the test suite more than once (e.g. in two different CI tasks). In that case you might want to create a separate environment like :fulltest and leave the normal :test environment alone.

#### Using an end-to-end plug

In our router lib/myapp_web/router.ex, we add a forward directive that is only used in our end-to-end test environment. This will forward all requests to /end-to-end/* to a dedicated test plug.

```elixir
# Mix.env/0 is evaluated at compile time, so this route only exists in :test
if Mix.env() == :test do
  forward("/end-to-end", MyApp.Plug.TestEndToEnd)
end
```

For now, let’s set up some scaffolding for this plug, which we will complete later on.

```elixir
defmodule MyApp.Plug.TestEndToEnd do
  use Plug.Router

  plug :match
  plug :dispatch

  match _, do: send_resp(conn, 404, "not found")
end
```

## Isolating database access

(This is written for Ecto 2.2.8; I think the API is vastly different in Ecto 3.x.)
When running tests within your Phoenix application, the test suite is already doing quite a few interesting things for you. All of the details are documented very well in the Ecto.Adapters.SQL.Sandbox module documentation. Essentially, what you need to know is that when you run your Phoenix controller tests with mix test, each test is run in a separate process by default (this is how ex_unit runs tests concurrently). Each test process then checks out a database connection, makes some changes and checks that connection back in when the test is done. This way, changes made within that test are never actually persisted to the database.

Ecto’s sandbox pool is based on an ownership mechanism, which means that the process checking out a connection is the only one that can access it. You can, however, allow other processes to use a connection, or run the sandbox in shared mode, which gives all processes access to the connection. The downside of shared mode is that tests can no longer be run concurrently; that is not ideal, but it’s a tradeoff we are willing to make for this example. It is not impossible to do all of this in parallel, but it would require some more advanced logic. (Your Phoenix tests will not be affected by this.)

We create two functions in our end-to-end test plug: one for checking out a connection and setting it to shared mode, and one for checking the connection back in. ownership_timeout is set to :infinity, which means the connection will remain checked out until we manually check it back in.

```elixir
defp checkout_shared_db_conn do
  :ok = Ecto.Adapters.SQL.Sandbox.checkout(Repo, ownership_timeout: :infinity)
  :ok = Ecto.Adapters.SQL.Sandbox.mode(Repo, {:shared, self()})
end

defp checkin_shared_db_conn(_) do
  :ok = Ecto.Adapters.SQL.Sandbox.checkin(Repo)
end
```

## Checking connections in and out via API

You might have noticed that checkout_shared_db_conn/0 has arity 0, while checkin_shared_db_conn/1 takes an argument but doesn’t care about it. Weird. The reason for this is that connections are bound to their owner: if the owner process exits, the connection is closed. If you want to check connections in and out via API, this is terrible, since each request spawns a process that is discarded once the reply is sent.

We add two routes to our plug, which allow us to check database connections in and out. To solve the ownership problem, we spawn an agent when checking out a database connection; that agent is only terminated once we check the connection back in. This way, the connection stays open across API requests.

```elixir
post "/db/checkout" do
  # If the agent is registered and alive, a db connection is checked out already.
  # Otherwise, we spawn the agent and let it(!) check out the db connection.
  owner_process = Process.whereis(:db_owner_agent)

  if owner_process && Process.alive?(owner_process) do
    send_resp(conn, 200, "connection has already been checked out")
  else
    {:ok, _pid} = Agent.start_link(&checkout_shared_db_conn/0, name: :db_owner_agent)
    send_resp(conn, 200, "checked out database connection")
  end
end

post "/db/checkin" do
  # If the agent is registered and alive, we check the connection back in.
  # Otherwise, no connection has been checked out and we ignore this.
  owner_process = Process.whereis(:db_owner_agent)

  if owner_process && Process.alive?(owner_process) do
    Agent.get(owner_process, &checkin_shared_db_conn/1)
    Agent.stop(owner_process)
    send_resp(conn, 200, "checked in database connection")
  else
    send_resp(conn, 200, "connection has already been checked back in")
  end
end
```

So checkin_shared_db_conn/1 needs to take one argument, since it is used as an Agent getter and therefore gets passed the agent’s state. We can now send a POST request to /end-to-end/db/checkout, make some changes, then send another POST request to /end-to-end/db/checkin, and the changes will be gone… Hooray!

## Exposing our factory

When setting up tests, we usually need to insert some data before our application is in the desired test state. We use thoughtbot/ex_machina factories for that purpose. If you use another way of mocking your test data, you will have to come up with a solution of your own, but for ex_machina, this is what we are using. There are careless atom conversions in this code, but since it is only used for testing, that should be fine.

```elixir
post "/db/factory" do
  # When piped through a generic Phoenix JSON API pipeline, a route like
  # this allows you to call your factory via your test API easily.
  with {:ok, schema} <- Map.fetch(conn.body_params, "schema"),
       {:ok, attrs} <- Map.fetch(conn.body_params, "attributes") do
    db_schema = String.to_atom(schema)
    db_attrs = Enum.map(attrs, fn {k, v} -> {String.to_atom(k), v} end)
    db_entry = Factory.insert(db_schema, db_attrs)

    send_resp(conn, 200, Poison.encode!(%{id: db_entry.id}))
  else
    _ -> send_resp(conn, 400, "schema or attributes missing")
  end
end
```

## Configuring Cypress

I trust you will be able to set up cypress on your own; their documentation is pretty amazing. Once you have cypress installed and running on your frontend, we need to make some changes to /cypress/support/index.js. The hooks we add here are global and will affect every test we write. Don’t worry if the commands look strange to you, we will define them in the next section.

```javascript
before(() => {
  // Before we run any tests, we reset our database.
  // We also check back in any open database connection. If you save a
  // test file, cypress will re-run the tests without finishing the ones
  // it is currently running, so we might end up with a checked-out
  // connection lying around and blocking our database reset.
  cy.checkindb()
  cy.resetdb()
})

beforeEach(() => {
  // Before each test, we check out a database connection
  cy.checkoutdb()
})

afterEach(() => {
  // After each test, we check the database connection back in
  cy.checkindb()
})
```

Alright, now let’s look at these commands. We will have to add them manually in /cypress/support/commands.js, using Cypress.Commands.add().

```javascript
Cypress.Commands.add("resetdb", () => {
  cy.exec('docker-compose run myapp mix do ecto.drop, ecto.create, ecto.migrate')
})

Cypress.Commands.add("checkoutdb", () => {
  cy.request('POST', '/api/end-to-end/db/checkout').as('checkoutDb')
})

Cypress.Commands.add("checkindb", () => {
  cy.request('POST', '/api/end-to-end/db/checkin').as('checkinDb')
})

Cypress.Commands.add("factorydb", (schema, attrs) => {
  cy.log(`Creating a ${schema} via fullstack factory`)
  cy.request('POST', '/api/end-to-end/db/factory', {
    schema: schema,
    attributes: attrs
  }).as('factoryDb')
})
```

That’s it! Our database will be reset when our test suite starts. A connection will be checked out before every test and checked back in after every test, much like with our Phoenix integration tests. Now we can write cypress tests that are completely independent from each other, each starting with a clean application state. Here’s a simple example:

```javascript
it("shows email not confirmed notification in dashboard for unconfirmed user", () => {
  cy.factorydb('user', {
    username: "Sven Gehring",
    password: "password",
    email: "cbrxde@gmail.com"
  })

  // This is another custom command we use for getting our auth token
  cy.login('cbrxde@gmail.com', 'password')

  cy.visit('/dashboard')
  cy.get('#kc-content').find('[data-cy=email-confirm-alert]').should('exist')
})
```

# Elixir: Testing protected Phoenix controllers

Testing protected endpoints in Phoenix controllers is a topic that sparks confusion at best, and controversy at worst, amongst a surprising number of people. When using Guardian or other pluggable ways of authorizing requests, this behaviour has to be taken into consideration for controller tests. Multiple pull requests in the Guardian repository were working towards a solution for this, Guardian Backdoor, which has now been moved into its own repository. Continue reading

A few weeks ago, I decided to build a little tool for my Synology Download Station, since I was not quite satisfied with the features DS Download offered me for simple monitoring. However, while testing the API as documented in the official API docs, I stumbled upon some rather unusual behaviour that allows you to access the details of any download task, even if it does not belong to the user you’re authenticated as. Continue reading