RubyConf 2024

Published on: December 14, 2024 | Reading Time: 4 min | Last Modified: December 14, 2024

Tags: ruby, programming

This November I had the pleasure of attending RubyConf 2024 in Chicago. I had not attended a tech conference in person since EmberConf 2019 in Portland. It was a great time and a good way to meet people and reinvigorate my interest in Ruby as a performant, beautiful language with a bright future. What struck me about the conference was its distinct atmosphere: post-Covid social awakening, cautious optimism mixed with apprehension about AI, and soberness about the current state of the market.

One of the keynote presentations was a play on the hit TV show “Who Wants to Be a Millionaire?” Titled “Who Wants to Be a Ruby Engineer?”, the game was played by several pairs of mentors and their mentees. The questions were tricky and esoteric. For example, one answer relied on knowing that in Ruby the triple equals (===) on a Proc invokes it:

l = -> x { x**2 }

p l === 2
# => 4

From another question, I learned about the flip-flop operator:

(0..5).each do |x|
  if (x % 5 == 1) .. (x % 5 == 4)
    p "on"
  else
    p "off"
  end
end

# Output:
#
# "off"
# "on"
# "on"
# "on"
# "on"
# "off"

I suspect I hadn’t known about this operator because I don’t see a lot of use for it, and it may have fallen out of favor long ago. Apparently it comes from Perl, which is ironic because my first paying job was programming in Perl!

One of the most head-scratching questions revealed that sleep returns an integer: the number of seconds spent sleeping, rounded to whole seconds (and it could be impacted by other threads):

p sleep(3.6)
# => 3

Why? And why rounded?? A quirk indeed!

A big part of Ruby 3 is its improved concurrency story, and a few talks I attended boosted my confidence in it. One talk by JP Camara gave an overview of the four concurrency units in Ruby: processes, threads, fibers, and Ractors (still experimental). I really liked how it laid out a mental model of how these units compare to each other, and offered tips and strategies on when to leverage each for scalability. For traditional processes and threads: use up your cores with processes, then apply Amdahl’s Law to scale up Ruby threads depending on IO usage (network requests, database calls, even bcrypt, etc.). Fibers, along with the new fiber scheduler interface in Ruby 3, are now a good way to parallelize IO more deterministically. Fibers are also a little lighter on memory than threads, but they are still not truly parallel outside of IO. It seems to me that Ractors, despite their experimental nature, could be a real game changer. Honestly, it’s hard to believe that Ruby finally has a unit of concurrency that is truly parallel and safe.
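
To get a feel for that last point, here is a minimal sketch (my own, not from the talk) of two Ractors doing CPU-bound work at the same time. Each Ractor has its own lock, so the two reductions can actually run on separate cores in parallel:

ractors = 2.times.map do |i|
  Ractor.new(i) do |n|
    # CPU-bound work; each Ractor runs this in parallel with the other
    (1..5_000_000).reduce(:+) * (n + 1)
  end
end

p ractors.map(&:take)  # collect the result from each Ractor

Note that the block cannot touch outer local variables; the index is passed in through Ractor.new and received as a block argument, which is part of what makes Ractors safe.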

Another great talk on concurrency was given by Ivo Anjo of Datadog. The gist is that because Ruby threads all share the GVL (Global VM Lock), there is going to be some latency: threads spend time waiting for the lock. Instrumentation is therefore useful for understanding how long and how many threads are waiting for the GVL, and there is a gem for this, as well as a UI to visualize the results. A couple of numbers to keep in mind: a thread runs for 100ms at a time before it has to hand the GVL over, and the default RUBY_MAX_CPU value is 8 (the maximum number of native threads).
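
As a rough illustration of why this matters (my own sketch, not from the talk): CPU-bound work does not speed up when you spread it across threads, because they all queue behind the GVL:

require 'benchmark'

def cpu_work
  (1..5_000_000).reduce(:+)  # pure Ruby computation, holds the GVL while it runs
end

serial   = Benchmark.realtime { 4.times { cpu_work } }
threaded = Benchmark.realtime do
  4.times.map { Thread.new { cpu_work } }.each(&:join)
end

puts "serial:   #{serial.round(2)}s"
puts "threaded: #{threaded.round(2)}s"  # roughly the same: the threads take turns on the GVL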

There may be more groundbreaking improvements to Ruby in the near future. According to Matz, plans are in the works to allow separate namespaces for different versions of gems. The work is not finished yet, so we will have to stay tuned!