Rails 6.1 allows environment-specific configuration files to set up Active Storage. In development, the config/storage/development.yml file will take precedence over the config/storage.yml file. Similarly, in production, the config/storage/production.yml file will take precedence. If an environment-specific configuration is not present, Rails falls back to the configuration declared in config/storage.yml.
Before Rails 6.1, all storage services were defined in one file, each environment could set its preferred service in config.active_storage.service, and that service would be used for all attachments.
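For example, the per-environment choice typically looked like this (a sketch, assuming the default local and amazon services are defined in config/storage.yml):

# config/environments/development.rb
config.active_storage.service = :local

# config/environments/production.rb
config.active_storage.service = :amazon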
Now we can override the default application-wide storage service for any attachment, like this:
class User < ApplicationRecord
  has_one_attached :avatar, service: :amazon_s3
end
And we can declare a custom amazon_s3 service in the config/storage.yml file:
amazon_s3:
  service: S3
  bucket: "..."
  access_key_id: "..."
  secret_access_key: "..."
But we are still using the same service for storing avatars in both production and development environments.
To use a separate service per environment, Rails allows the creation of configuration files for each.
Let's change the service to something more generic in the User model:
class User < ApplicationRecord
  has_one_attached :avatar, service: :store_avatars
end
And add some environment configurations:
For production we'll add config/storage/production.yml:
store_avatars:
  service: S3
  bucket: "..."
  access_key_id: "..."
  secret_access_key: "..."
And for development we'll add config/storage/development.yml:
store_avatars:
  service: Disk
  root: <%= Rails.root.join("storage") %>
This ensures that Rails stores the avatars differently in each environment.
Check out the pull request to learn more.
Postgraphile is a great tool for generating an instant GraphQL API from a PostgreSQL database. When I started working with Postgraphile, its authorization part felt a bit different compared to the REST-based backends I had worked with before. Here I will share some differences that I noted.
First, let's distinguish authentication from authorization.
Authentication is determining whether a user is logged in or not. Authorization is then deciding what the user has permission to do or see.
Suppose we have to build a blog application with the below schema.
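A minimal sketch of such a schema could be a single blogs table (the column names here are assumed from the requirements and the code that follows):

CREATE TABLE public.blogs (
  id            serial PRIMARY KEY,
  content       text,
  creator_email text NOT NULL,
  is_published  boolean NOT NULL DEFAULT false
);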
Display published blogs (is_published = true) to all users.
Display unpublished blogs (is_published = false) only to their creator.
The REST implementation with JavaScript and Sequelize could look like the following.
The client requests the blogs from an endpoint and attaches the access token received from the authentication service.
const getBlogs = () => requestData({
  endpoint: `/api/blogs`,
  accessToken: '***'
});
The server receives the request, identifies the currently logged-in user from the access token, and queries the database on behalf of that user.
const { Op } = require("sequelize");

// Identify the current user, then fetch only the blogs they may see.
const userEmail = findEmail(accessToken);
const blogs = await models.Blogs.findAll({
  where: {
    [Op.or]: [
      { creatorEmail: userEmail },
      { isPublished: true }
    ]
  },
});
res.send(blogs);
Here, the backend code finds the user's email from the access token, then asks the database for the blogs whose creatorEmail matches the current user's email or whose isPublished field is true.
The database will return whatever data the server requests.
Similarly, for creating, editing, and deleting blogs, we can have different endpoints that handle the authorization logic in the backend code, as sketched below.
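For instance, a delete endpoint's authorization check might look roughly like this (a sketch; deleteBlog, findEmail, and the route wiring are assumptions):

const deleteBlog = async (req, res) => {
  const userEmail = findEmail(req.headers.authorization);
  const blog = await models.Blogs.findByPk(req.params.id);

  // Only the creator is allowed to delete the blog.
  if (!blog || blog.creatorEmail !== userEmail) {
    return res.status(403).send({ error: "Not allowed" });
  }

  await blog.destroy();
  res.send({ deleted: true });
};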
The Postgraphile implementation could look like the following.
The client requests the blogs using a GraphQL query. It also attaches the access token received from the authentication service.
const data = requestQuery({
  query: `
    query {
      allBlogs {
        nodes {
          content
          creatorEmail
          isPublished
        }
      }
    }
  `,
  accessToken: '***'
})
On the server, we configure Postgraphile to pass the user information to the database.
export default postgraphile(DATABASE_URL, schemaName, {
  pgSettings: (req) => {
    // Derive the user's email from the access token sent with the request.
    const userEmail = findEmail(req.headers.authorization);
    return {
      'app.current_user_email': userEmail
    };
  }
})
We can pass a function as Postgraphile's pgSettings property; every key/value pair it returns is applied to the connected Postgres database and can be read there with the current_setting function. (Custom settings need a two-part name, such as app.current_user_email.)
In the database, row-level security policies can be defined to control data access.
Row-level security policies are essentially SQL expressions that evaluate to true or false. If a policy is created and enabled for a table, it is checked before an operation on that table is performed.
CREATE POLICY blogs_policy_select
  ON public.blogs FOR SELECT TO users
  USING (
    is_published OR
    creator_email = current_setting('app.current_user_email')
  );

ALTER TABLE public.blogs ENABLE ROW LEVEL SECURITY;
Here, the policy named blogs_policy_select will be checked before selecting rows from the table public.blogs. A row will be returned only if its is_published field is true or its creator_email matches the current user's email.
Similarly, for creating, editing, and deleting blogs, we can have row-level security policies for the INSERT, UPDATE, and DELETE operations on the table.
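For example, an INSERT policy could ensure that users can only create blogs under their own email (a sketch, reusing the same app.current_user_email setting):

CREATE POLICY blogs_policy_insert
  ON public.blogs FOR INSERT TO users
  WITH CHECK (
    creator_email = current_setting('app.current_user_email')
  );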
The REST implementation handles authorization at the server level, while Postgraphile handles it at the database level. Each approach has its own advantages and disadvantages, which is a topic for another day.
PostGraphile provides sorting on all columns of a table in a GraphQL query by default through the orderBy argument. However, sorting based on an associated table's columns, or adding a custom sort, can be achieved via plugins. In this blog we will explore two such plugins.
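For example, the default enums let us sort posts by their own columns (assuming a posts table with a title column):

query getPostsSortedByTitle {
  posts: postsList(orderBy: TITLE_ASC) {
    id
    title
  }
}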
pg-order-by-related plugin
The pg-order-by-related plugin allows us to sort query results based on an associated table's columns. It does that by adding orderBy enums for the columns of all associated tables. Here's what we need to do to use this plugin.
npm i @graphile-contrib/pg-order-by-related
const express = require("express");
const { postgraphile } = require("postgraphile");
const PgOrderByRelatedPlugin = require("@graphile-contrib/pg-order-by-related");

const app = express();

app.use(
  postgraphile(process.env.DATABASE_URL, "public", {
    appendPlugins: [PgOrderByRelatedPlugin],
  })
);
Now we can sort using the new enum in the orderBy argument:
query getPostsSortedByUserId {
  posts: postsList(orderBy: AUTHOR_BY_USER_ID__NAME_ASC) {
    id
    title
    description
    author: authorByUserId {
      id
      name
    }
  }
}
The pg-order-by-related plugin is useful only when we want to sort data based on a first-level association. If we want to apply orderBy on second-level table columns or deeper, we have to use makeAddPgTableOrderByPlugin.
makeAddPgTableOrderByPlugin
makeAddPgTableOrderByPlugin allows us to add custom enums that are accessible on the specified table's orderBy argument. We can write our own custom select queries using this plugin. We will use a complex example to understand the use case of a custom orderBy enum.
In our posts list query, we want posts sorted by the author's address. The address has country, state, and city columns, and we want the list sorted by country, then state, then city.
Here's how we can achieve this using makeAddPgTableOrderByPlugin.
plugins/orderBy/orderByPostAuthorAddress.js
import { makeAddPgTableOrderByPlugin, orderByAscDesc } from "graphile-utils";

export default makeAddPgTableOrderByPlugin(
  "public",
  "post",
  ({ pgSql: sql }) => {
    const author = sql.identifier(Symbol("author"));
    const address = sql.identifier(Symbol("address"));

    return orderByAscDesc(
      "AUTHOR_BY_USER_ID__ADDRESS_ID__COUNTRY__STATE__CITY",
      // Build a "country, state, city" sort key for the post's author.
      ({ queryBuilder }) => sql.fragment`(
        SELECT
          CONCAT(
            ${address}.country,
            ', ',
            ${address}.state,
            ', ',
            ${address}.city
          ) AS full_address
        FROM public.user as ${author}
        JOIN public.address ${address} ON ${author}.address_id = ${address}.id
        WHERE ${author}.id = ${queryBuilder.getTableAlias()}.user_id
        ORDER BY ${address}.country DESC, ${address}.state DESC, ${address}.city DESC
        LIMIT 1
      )`
    );
  }
);
Next, we export the orderBy plugins from an index file:
plugins/orderBy/index.js
export { default as orderByPostAuthorAddress } from "./orderByPostAuthorAddress";
Then we append the orderBy plugins to postgraphile:
import express from "express";
import { postgraphile } from "postgraphile";
import * as OrderByPlugins from "./plugins/orderBy";

const app = express();

app.use(
  postgraphile(process.env.DATABASE_URL, "public", {
    appendPlugins: [...Object.values(OrderByPlugins)],
  })
);
Finally, we can use the custom enum in the orderBy argument:
query getPostsSortedByAddress {
  posts: postsList(
    orderBy: AUTHOR_BY_USER_ID__ADDRESS_ID__COUNTRY__STATE__CITY
  ) {
    id
    title
    description
    author: authorByUserId {
      id
      name
      address {
        id
        country
        state
        city
      }
    }
  }
}
Please head to the pg-order-by-related and makeAddPgTableOrderByPlugin documentation pages for more details.
Before Rails 6.1, we could only traverse the object chain in one direction - from has_many to belongs_to. Now we can traverse the chain bi-directionally.
The inverse_of option, available on both belongs_to and has_many, is used to specify the name of the inverse association.
Let's see an example.
class Author < ApplicationRecord
  has_many :books, inverse_of: :author
end

class Book < ApplicationRecord
  belongs_to :author, inverse_of: :books
end
irb(main):001:0> author = Author.new
irb(main):002:0> book = author.books.build
irb(main):003:0> author == book.author
=> true
In the above code, we first created the author and then a book instance through the has_many association. In line 3, we traverse the object chain back to the author using the belongs_to association method on the book instance.
irb(main):001:0> book = Book.new
irb(main):002:0> author = book.build_author
irb(main):003:0> author.books
=> #<ActiveRecord::Associations::CollectionProxy []>
In the above case, we created the book instance and then created the author instance using the method added by the belongs_to association. But when we tried to traverse the object chain through the has_many association, we got an empty collection instead of one containing the book instance.
The belongs_to inverse association can now be traversed in the same way as the has_many inverse association.
irb(main):001:0> book = Book.new
irb(main):002:0> author = book.build_author
irb(main):003:0> author.books
=> #<ActiveRecord::Associations::CollectionProxy [#<Book id: nil, author_id: nil, created_at: nil, updated_at: nil>]>
Here we get the collection with the book instance instead of an empty collection.
We can also verify this using a test.
class InverseTest < ActiveSupport::TestCase
  def test_book_inverse_of_author
    author = Author.new
    book = author.books.build

    assert_equal book.author, author
  end

  def test_author_inverse_of_book
    book = Book.new
    author = book.build_author

    assert_includes author.books, book
  end
end
In previous Rails versions, the second test case would fail.
# Running:
.F
Failure:
InverseTest#test_author_inverse_of_book
Expected #<ActiveRecord::Associations::CollectionProxy []> to include #<Book id: nil, author_id: nil, created_at: nil, updated_at: nil>.
Finished in 0.292532s, 6.8369 runs/s, 10.2553 assertions/s.
2 runs, 3 assertions, 1 failures, 0 errors, 0 skips
In Rails 6.1, both the tests will pass.
# Running:
..
Finished in 0.317668s, 6.2959 runs/s, 9.4438 assertions/s.
2 runs, 3 assertions, 0 failures, 0 errors, 0 skips
Check out this pull request for more details.
Rails 6.1 provides additional tasks for working with a specific database in a multi-database setup.
Before Rails 6.1, only a few tasks, such as db:create, db:drop, and db:migrate, could be run against a specific database (for example, rails db:migrate:primary).
But some tasks that could usefully be applied to a specific database were missing. Let's check out an example.
Before Rails 6.1, running a top-level migration on a multi-database project dumped the schema for all the configured databases, but if a database-specific migration was run, the schema was not dumped. There were also no tasks to manually dump the schema of a specific database.
> rails db:schema:dump:primary
rails aborted!
Don't know how to build task `db:schema:dump:primary` (See the list of available tasks with `rails --tasks`)
Did you mean? db:schema:dump
Therefore, Rails 6.1 introduced database-specific variants of tasks such as db:schema:dump, db:schema:load, db:structure:dump, db:structure:load, and db:test:prepare.
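For example (assuming databases named primary and animals are configured in config/database.yml):

> rails db:schema:dump:primary
> rails db:schema:load:animals
> rails db:test:prepare:primary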
Check out the pull request for more details.