blog: My personal blog, hauleth.dev

Fix links and typos in article about Elixir perf

+12 -4
content/post/things-about-elixir-you-probably-will-never-need/index.md
```diff
@@ -15,6 +15,10 @@
   "beam",
   "performance"
 ]
+
+[[extra.thanks]]
+name = "Angry Clippy (@ze.du on Discord)"
+why = "Redaction"
 +++
 
 In my last larger gig I worked on fascinating project - [Postgres connection
@@ -37,6 +41,8 @@
 I can. This project now lives as [Ultravisor][] - it is still nowhere near being
 done in a way that I like, but I still go back to work on it from time to time
 to find potential performance improvements.
+
+[Ultravisor]: https://github.com/Ultravisor/ultravisor
 
 This is a story of things that I have done and learned during that journey.
 
@@ -74,6 +80,8 @@
 times and then running `cat *.bggg` to concatenate all files into larger trace.
 That has disadvantages, but at least it was workable within [Speedoscope][]
 which I also highly recommend to anyone who needs to work on such optimisation.
+
+[Speedoscope]: https://speedoscope.app
 
 While flame graphs are awesome, there is cost to gathering them with eFlambè -
 it greatly affects performance. Fortunately Erlang has some built in tools that
@@ -171,7 +179,7 @@
 [Telemetry]: https://github.com/beam-telemetry/telemetry
 
 In this project the metrics are exposed in Prometheus/OpenMetrics format, which
-mean that there need to be collection system within application. In BEAM
+means that there needs to be collection system within the application. In BEAM
 applications the standard way to implement that is to use ETS tables to store
 recorded values. Fortunately there are libraries to handle that for you, and for
 the longest time "gold standard" for it was `telemetry_prometheus_core` library
@@ -285,9 +293,9 @@
 
 ## Lesson: Calling your `GenServer`s is fast, but not 90k times per second fast
 
-One of the interesting observations is that I have spotted is that if there are
-longer running queries, one that send more data over the network than just
-simple short response, then the difference between Ultravisor and "state of the
+One of the interesting observations that I have spotted is that if there are
+longer running queries, ones that send more data over the network than just
+simple short responses, then the difference between Ultravisor and "state of the
 art" tools like [PgBouncer][] or [PgDog][] (that are written in non-managed
 languages like C and Rust) is much smaller (obviously it is still there, but it
 is on par, not substantially off).
```
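For readers skimming the diff: the `*.bggg` files come from eFlambè's Brendan Gregg output format. A minimal sketch of that capture workflow, assuming the `eflambe` Hex package's documented `capture/3`; `MyApp.Handler.handle_packet/2` is a hypothetical stand-in for whatever hot function you want to profile:

```elixir
# Trace the next 10 calls to the function. Each traced call writes one
# `*.bggg` file (Brendan Gregg's flame graph format) that the viewer
# linked above can open directly.
:eflambe.capture({MyApp.Handler, :handle_packet, 2}, 10,
  output_format: :brendan_gregg
)
```

Concatenating the per-call files afterwards (`cat *.bggg > all.bggg`) gives the single larger trace the article mentions.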
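The "ETS tables to store recorded values" pattern from the fourth hunk boils down to a few lines. This is a sketch under assumptions, not the `telemetry_prometheus_core` API: the table name, event name, and metric name are all made up for illustration.

```elixir
# A minimal sketch of ETS-backed metric collection: a Telemetry handler
# bumps a counter in a public ETS table, and the scrape endpoint reads
# the table back. All names (table, event, metric) are illustrative.
defmodule MyApp.MetricsSketch do
  @table :metrics_counters

  def setup do
    :ets.new(@table, [:named_table, :public, :set, write_concurrency: true])

    :telemetry.attach(
      "count-queries",
      [:my_app, :query, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(_event, _measurements, _metadata, _config) do
    # Atomic increment; inserts the row with 0 if it does not exist yet.
    :ets.update_counter(@table, :queries_total, {2, 1}, {:queries_total, 0})
  end

  # Render the counter in Prometheus/OpenMetrics text format.
  def render do
    count =
      case :ets.lookup(@table, :queries_total) do
        [{:queries_total, n}] -> n
        [] -> 0
      end

    "# TYPE my_app_queries_total counter\nmy_app_queries_total #{count}\n"
  end
end
```

Because the table is `:public` with `write_concurrency`, handlers can record from any process without funneling writes through a single collector.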
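And for the `GenServer` lesson in the last hunk, one rough way to see the kind of ceiling the heading refers to is to time synchronous round-trips through a single process. A sketch only: absolute numbers vary wildly with hardware, and real traffic where many callers funnel into the same mailbox hits the limit sooner.

```elixir
# Rough single-caller measurement of GenServer.call round-trip throughput.
# Illustrative: a single server process serializes all callers, which is
# exactly why it becomes the bottleneck at high request rates.
defmodule BumpServer do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, 0, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call(:bump, _from, n), do: {:reply, n + 1, n + 1}
end

{:ok, _pid} = BumpServer.start_link([])

n = 100_000

{micros, :ok} =
  :timer.tc(fn ->
    Enum.each(1..n, fn _ -> GenServer.call(BumpServer, :bump) end)
  end)

IO.puts("~#{round(n / (micros / 1_000_000))} calls/second through one process")
```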