You are reading articles by Simplificator, a Swiss-based custom software development agency. Here we write about the problems we solve and how we work together.
The Ruby API doc is a great source of information about my programming language of choice. Even after years of writing Ruby code I learn new tricks and features. Lately I've been looking into the Module class in more detail.
I did not know that there is a callback for methods being added to a class. Not that I missed it much, or that I even know what I could use it for. A similar callback exists for the removal of methods.
class Foo
  def self.method_added(method)
    puts method
  end

  def hello_world
  end
end
# => "hello_world"
Because there is also a callback for methods that are undef'd (no documentation for this method though) I started to wonder what the difference between removing and undefining a method is. Consider the following classes:
class Base
  def hello_world
    puts "Hello World from #{self.class.name}"
  end

  def self.method_removed(name)
    puts "removed #{name} from #{self.class.name}"
  end

  def self.method_undefined(name)
    puts "undefined #{name} from #{self.class.name}"
  end
end

class Undefined < Base
  def hello_world
    puts "Hello World from #{self.class.name}"
  end

  undef_method(:hello_world)
end

class Removed < Base
  def hello_world
    puts "Hello World from #{self.class.name}"
  end

  remove_method(:hello_world)
end
If you run the code there will be some output from the callbacks:
undefined hello_world from Class
removed hello_world from Class
But the interesting part starts when you call those methods:
Removed.new.hello_world
# => Hello World from Removed

Undefined.new.hello_world
# => undefined method 'hello_world' for #<Undefined:0x007f8dd488a8d8> (NoMethodError)
undef_method prevents the class from responding to a method, even if it is present in a superclass or a mixed-in module. remove_method only removes it from the current class, so the object will still respond to the call if the method is defined in a superclass or mixed-in module.
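The difference also shows up in respond_to?. A quick check of my own, re-creating trimmed-down versions of the classes above without the callbacks:

```ruby
class Base
  def hello_world
    puts "Hello World from #{self.class.name}"
  end
end

class Undefined < Base
  def hello_world; end
  undef_method(:hello_world)  # blocks method lookup entirely
end

class Removed < Base
  def hello_world; end
  remove_method(:hello_world)  # only removes Removed's own copy
end

puts Removed.new.respond_to?(:hello_world)    # => true, Base's method is found
puts Undefined.new.respond_to?(:hello_world)  # => false, lookup is blocked
```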
Something that I've seen in other people's source code already but don't use myself: the ability to pass a list of Strings/Symbols to the visibility modifiers such as private, public and protected:
class Foo
  def a_method
  end

  private(:a_method)
end

Foo.new.a_method
# => private method 'a_method' called for #<Foo:0x007fb169861c90> (NoMethodError)
Note that those visibility modifiers are methods and not part of the language syntax. This is different from other languages like Java where public/private/protected are language keywords (and no modifier is also supported and leads to default visibility).
Actually I prefer the Java syntax over the Ruby one: having the visibility as part of the method signature makes it easy to spot what visibility a method has. Especially in long classes this can be difficult in Ruby. It is actually possible to have a similar style in Ruby. Ruby allows you to write multiple statements on one line as long as they are separated by a semicolon:
class Foo
  private; def hello_world
    puts "hello world"
  end
end
This looks awkward and modifies the visibility for all following methods as well.
For newer Rubies (2.1+) you can omit the semicolon, as def no longer returns nil but the method name as a Symbol:
class Foo
  private def hello_world
    puts "hello world"
  end
end
(Thanks to Thomas Ritter for the hint.)
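You can see the return value of def directly (this little snippet is my own addition):

```ruby
# Since Ruby 2.1, def is an expression that returns the method name as a Symbol.
result = def hello_world
  puts "hello world"
end

p result  # => :hello_world
```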
Now let's look at how you would make a private class method:
class Foo
  private def self.hello_world
    puts "hello World"
  end
end
You would expect hello_world to be private, right? Not exactly: you can still call it. To actually make a class method private you have to use private_class_method:
class Foo
  def self.hello_world
    puts "hello World"
  end

  private_class_method :hello_world
end
Note that, confusingly, private_class_method does not set the visibility for the class methods following the call the way private does; you need to pass the method name as an argument!
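Another way to get a truly private class method, a sketch of my own rather than something from the paragraphs above: open the singleton class, where the plain private modifier applies to the class methods defined inside it.

```ruby
class Foo
  class << self
    private  # inside the singleton class, private works as usual

    def hello_world
      puts "hello world"
    end
  end
end

Foo.respond_to?(:hello_world)  # => false, the class method is private
Foo.send(:hello_world)         # prints "hello world"
```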
So I stick to grouping methods by visibility and write small classes to make sure I don't lose track of what visibility the methods are in.
Learned something new today? Then go pick a class from Ruby core and read up on it in the API doc. Chances are you'll learn something new.
Rake tasks are a convenient way to automate repeated tasks and make them available via the command line. Oftentimes these tasks can be executed without any user input. Think of a built-in task like db:migrate: it does not take any arguments. Other tasks do take arguments. Usually, they work like this: rake the_namespace:the_task[arg1,arg2].
If you look for a solution to rake tasks with arguments, you often find this code snippet:
namespace :utils do
  task :my_task, [:arg1, :arg2] do |t, args|
    puts "Args were: #{args}"
  end
end
This code snippet, however, does not load your Rails environment. So you cannot load any models for example.
A solution to this problem looks like this:
namespace :utils do
  desc 'Unlocks this user. Usage: utils:unlock_user USER=42'
  task :unlock_user => :environment do |t, args|
    user_id = ENV['USER'].to_i
    puts "Loading user with id = #{user_id}"

    user = User.find(user_id)
    user.unlock!
  end
end
You call this rake task with rake utils:unlock_user USER=42. By specifying USER=42 you pass the argument in via an environment variable.
There is, however, a more standard way of implementing this.
namespace :utils do
  desc 'Unlocks this user. Usage: utils:unlock_user[42] for the user ID 42'
  task :unlock_user, [:user_id] => :environment do |task, args|
    user_id = args.user_id
    puts "Loading user with id = #{user_id}"

    user = User.find(user_id)
    user.unlock!
  end
end
There we go: we now have a rake task with arguments in brackets. If you want more arguments, you simply add them to the argument list after the task name and retrieve them from the args object by name.
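As a sketch (the task and argument names here are made up), a task with two arguments in plain Rake, without the Rails :environment dependency, looks like this:

```ruby
require 'rake'
include Rake::DSL  # makes namespace/desc/task available outside a Rakefile

namespace :utils do
  desc 'Greets a user. Usage: utils:greet[John,Doe]'
  task :greet, [:first_name, :last_name] do |task, args|
    puts "Hello #{args.first_name} #{args.last_name}"
  end
end
```

Invoked from the command line as rake "utils:greet[John,Doe]" (the quotes keep some shells from interpreting the brackets).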
Which variant you prefer is up to you. The first one, with the explicit environment variable, is probably easier to read; the second is more in line with standard Rake.
The Enumerable module gives you methods to search, iterate, traverse and sort elements of a collection. All you need to do is implement each and include the mixin.
class NameList
  include Enumerable

  def initialize(*names)
    @names = names
  end

  def each
    @names.each { |name| yield(name) }
  end
end
list = NameList.new('Kaiser Chiefs', 'Muse', 'Beck')

list.each_with_index do |name, index|
  puts "#{index + 1}: #{name}"
end
# => 1: Kaiser Chiefs
# => 2: Muse
# => 3: Beck
So by defining each and including Enumerable we got an impressive list of methods that we can call on our NameList instance. But having all those methods added does not feel right. Usually you will use one or two of them. Why clutter the interface by adding 50+ methods that you'll never use? There is an easy solution for this:
class NameList
  def initialize(*names)
    @names = names
  end

  def each
    return @names.to_enum(:each) unless block_given?
    @names.each { |name| yield(name) }
  end
end
list = NameList.new('Kaiser Chiefs', 'Muse', 'Beck')

list.each do |name|
  puts name
end
# => Kaiser Chiefs
# => Muse
# => Beck
Note that each now returns the Enumerator on which you can call each_with_index (or any of the other methods) unless a block is given. So you can even call it like this:
puts list.each.to_a.size # => 3
By returning an Enumerator when no block is given, one can chain enumerator methods. Ever wanted to do an each_with_index on a hash? There you go:
points = { mushroom: 10, coin: 12, flower: 4 }

points.each.each_with_index do |key_value_pair, index|
  puts "#{index + 1}: #{key_value_pair}"
end
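A small addition of my own: the yielded key/value pair can be destructured right in the block parameters, which often reads nicer than working with the two-element array:

```ruby
points = { mushroom: 10, coin: 12, flower: 4 }

# Parentheses in the block parameters destructure each [key, value] pair.
points.each.each_with_index do |(key, value), index|
  puts "#{index + 1}: #{key} scores #{value}"
end
# => 1: mushroom scores 10
# => 2: coin scores 12
# => 3: flower scores 4
```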
If you have been programming Ruby for a while, then you have seen the splat operator. It can be used to define methods that accept a variable-length argument list, like so:
def single_splat(an_argument, *rest)
  puts "#{rest.size} additional argument(s)"
end
Now to the double splat operator. It was added to Ruby in version 2.0 and behaves similarly to the single splat operator, but for hashes in argument lists:
def double_splat(**hash)
  p hash
end

double_splat()
# => {}

double_splat(a: 1)
# => {:a => 1}

double_splat(a: 1, b: 2)
# => {:a => 1, :b => 2}

double_splat('a non hash argument')
# => `double_splat': wrong number of arguments (1 for 0) (ArgumentError)
# (The message for the case where I pass in a non-hash argument is not very helpful, I'd say)
"What!" I can hear you shout. Where is the difference to a standard argument. In the use case as shown above it is pretty much the same. But you would be able to pass in nil values or non hash values, so more checks would be required:
def standard_argument(hash = {})
  puts hash
end

standard_argument()
# => {}

standard_argument(nil)
# =>
Now if we move this to a more realistic use case, consider a method taking a variable list of arguments AND some options:
def extracted_options(*names, **options)
  puts "#{names} / #{options}"
end
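Calling it with a mix of positional and keyword arguments shows the split (a hypothetical call of my own):

```ruby
def extracted_options(*names, **options)
  puts "#{names} / #{options}"
end

# The positional arguments land in names ([:a, :b]),
# the keyword argument c: 1 lands in options.
extracted_options(:a, :b, c: 1)
```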
Ruby on Rails developers might know this pattern already. It is used in various parts of the framework. It is so common that the functionality has been defined in extract_options!
Rails offers multiple ways to deal with exceptions and depending on what you want to achieve you can pick either of those solutions. Let me walk you through the possibilities.
begin/rescue block
begin/rescue blocks are the standard Ruby mechanism for dealing with exceptions. It might look like this:
begin
  do_something
rescue
  handle_exception
end
This works nicely for exceptions that might happen in your code. But what if you want to rescue every occurrence of a specific exception, say a NoPermissionError which might be raised from your security layer? Clearly you do not want to add a begin/rescue block to all your actions just to render an error message, right?
Around filter
An around filter could be used to catch all exceptions of a given class. Honestly, I haven't used an around filter for this; the idea came to my mind when writing this blog post.
class ApplicationController < ActionController::Base
  around_filter :handle_exceptions

  private

  def handle_exceptions
    begin
      yield
    rescue NoPermissionError
      redirect_to 'permission_error'
    end
  end
end
rescue_from
rescue_from gives you the same possibilities as the around filter. It's just shorter and easier to read and if the framework offers a convenient way, then why not use it. There are multiple ways to define a handler for an exception, for a short and sweet handler I prefer the block syntax:
class ApplicationController < ActionController::Base
  rescue_from 'NoPermissionError' do |exception|
    redirect_to 'permission_error'
  end
end
exceptions_app
There is an additional feature (added in Rails 3.2) that allows you to handle exceptions. You can specify an exceptions_app which is used to handle errors. You can use your own Rails app for this:
config.exceptions_app = self.routes
If you do so, then your routing must be configured to match error codes like so:
match '/404', to: 'exceptions#handle_404'
Alternatively you can specify a lambda which receives the whole Rack env:
config.exceptions_app = lambda do |env|
  # do something
end
Do you wonder how you can call an arbitrary action when you have the env? It's pretty easy:
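The snippet that belonged here appears to be missing; a sketch of what such a call can look like (ExceptionsController and handle_404 are the hypothetical names from the route above) uses ActionController::Metal.action, which turns a single action into a Rack endpoint:

```ruby
# config/application.rb (sketch; assumes the ExceptionsController from the route above)
config.exceptions_app = lambda do |env|
  # .action(:handle_404) returns a Rack app for that one action,
  # so it can be called directly with the env.
  ExceptionsController.action(:handle_404).call(env)
end
```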
Our new laptop sleeves arrived. Every employee picked two of the five values that Simplificator stands for. Now we have nice and colorful sleeves and they convey our message.
We are using cancancan as an authorization gem for one of our applications. To make sure that our authorization rules are correct, we unit-tested the Ability object. In the beginning the test was quite fast, but the more rules we added, the longer it took to run the whole model test. When we analyzed what was slowing down our test, we saw that quite some time was actually spent persisting our models to the database with factory_girl as part of the test setup. It took a bit more than 60 seconds to run the whole ability spec, which is far too much for a model test.
Let's look at an excerpt of our ability and its spec:
# ability.rb
def acceptance_modes
  can [:read], AcceptanceMode

  if @user.admin?
    can [:create, :update], AcceptanceMode
    can :destroy, AcceptanceMode do |acceptance_mode|
      acceptance_mode.policies.empty?
    end
  end
end
# ability_spec.rb (excerpt -- the surrounding describe/context blocks
# set up the admin user, the ability subject and the acceptance_mode)
before(:each) do
  create(:policy, :acceptance_mode => acceptance_mode)
end

[:read, :create, :update].each do |action|
  it { should be_able_to(action, acceptance_mode) }
end

it { should_not be_able_to(:destroy, acceptance_mode) }
# ability_matcher.rb
module AbilityHelper
  extend RSpec::Matchers::DSL

  matcher :be_able_to do |action, object|
    match do |ability|
      ability.can?(action, object)
    end

    description do
      "be able to #{action} -- #{object.class.name}"
    end

    failure_message do |ability|
      "expected #{ability.class.name} to be able to #{action} -- #{object.class.name}"
    end

    failure_message_when_negated do |ability|
      "expected #{ability.class.name} NOT to be able to #{action} -- #{object.class.name}"
    end
  end
end

RSpec.configure do |config|
  config.include AbilityHelper
end
We first set up a user -- in this case it's an admin user -- and then initialize our ability object with this user. We further have a model called AcceptanceMode, which offers the usual CRUD operations. An acceptance mode has many policies. If any policy is attached to an acceptance mode, we don't want to allow it to be deleted.
Note that a lot of models are created, meaning these are persisted to the database. In this excerpt, we have 4 test cases. Each of these test cases needs to create the admin user, acceptance mode and also create a policy. This is a lot of persisted models, even more so if you realize that this is not all the acceptance mode specs and acceptance mode specs are only a small fraction of the whole ability spec. Other models are even more complex and require more tests for other dependencies.
But is this really necessary? Do we really need to persist the models or could we work with in-memory versions of these?
# (same spec as before, but the setup now uses build instead of create,
#  e.g. the acceptance_mode built with :policies => [build(:policy)])
[:read, :create, :update].each do |action|
  it { should be_able_to(action, acceptance_mode) }
end

it { should_not be_able_to(:destroy, acceptance_mode) }
Note that all the create calls are replaced with build. We actually don't need the models to be persisted to the database. The ability mainly checks if the user has admin rights (with admin?), which can be tested with an in-memory version of a user. Further, the acceptance mode can be built with an array that contains an in-memory stub policy. If you look closely at the Ability implementation, you will see that that's not even necessary. Any object could reside in the array and the spec would still pass. But we decided to use an in-memory policy nonetheless.
With this approach, no model is persisted to the database. All models are in-memory but still collaborate the same way as they would when loaded from the database. However, no time is wasted on the database. The whole ability spec run time was reduced from 60 seconds to 5 seconds, simply by not persisting models to the database in the test setup.
As an aside: there is a lot of discussion around the topic of factories and fixtures. Fixtures load a fixed set of data into the database at the start of the test suite, which avoids these kinds of problems entirely.
That's it. We hope you can re-visit some of your slow unit tests and try to use in-memory models, or avoid persisting your models for the next unit test you write!
Apps are software products with human interfaces. Web sites are that, too. The WYSIWYG dream of the 90s was telling people all they need to do is buy DreamWeaver and they’ll be able to build and be the same success as Amazon. This was nonsense.