

This page lists the technical skills I've acquired throughout my career, with varying degrees of experience, along with some details about how I used them.

I've tried to include every skill I can remember that might be relevant for software engineering projects, though I've probably missed some.


In general, I've had a number of different leadership experiences.

Tech Lead

I've been a tech lead at a number of different companies: GoNow, Dafiti, YouGov and Yalo. It's my favorite kind of leadership, as it combines caring about people with helping them grow technically.


Engineering Management

I have Engineering Management experience from working at Yalo.


Scrum

I learned Scrum while working at UOL, and then used it at some other companies too. I was also responsible for implementing it in my team at Nokia Siemens Networks, where I ended up being the Scrum Master besides being a Python developer.

Programming & other languages

Adobe ActionScript

I used all versions of ActionScript when it still existed, from 1 to 3, to build more elaborate interactions in Flash. This was between 2000 and 2007, approximately.

Bash / shell scripting

I've been using Bash and shell scripting in general since I started using Linux, which was around 2007. I tend to use these scripts for simpler things; when a script starts getting too complicated, I just rewrite it in Python, which is much more powerful as a scripting language.

C / C++

I actually started learning to program in C and C++ — they were my very first languages — but I didn't go very far with them. My idea was to learn them to build digital audio software. My exposure to them was minimal, and too long ago for me to remember much. This was in 2003.


CSS

I've been using CSS since I started building websites, which was around 2000, even before I learned to program for real (with PHP). At that time I used Macromedia Dreamweaver to help me embed styles in HTML through its UI.


Cython

I used Cython in a few systems at YouGov, between 2014 and 2022.


Erlang

I only really experimented with the language, around 2008 or 2009 or so, and although I really like it, I never found a way to use it professionally. It would be nice if I could at some point, though.

Functional programming

I had marginal contact with functional programming when I started learning Erlang, but I never used it in production. This was around 2008 or 2009, I guess.

Go (golang)

My first contact with Go was when I worked on Ubuntu Push, at Canonical, back in 2014. It was only for a few weeks, though, and the project was a bit too turbulent for me to hold fond memories of the language. I only got back to learning it in 2022, and now I really like it - it's an impressive language and platform, especially for highly distributed systems and for ease of development without sacrificing too much performance.

I ended up using it a bit more at Yalo.


HTML

I've been messing with HTML since 2000, and have used it in all (or almost all) of the products I've worked on.


Java

I started learning Java around 2007 or 2008, I don't remember exactly, but I really started using it in production when I worked as a frontend engineer at UOL - I somehow managed to convince the team to let me do server-side work there too, which was super nice of them. I don't remember the exact stack, but I think it was Jetty, Spring, Hibernate and Velocity, with jQuery on the frontend.


JavaScript

From 2000 to 2003 I did some JS stuff that I'm glad has disappeared. It was only when I really started learning how to program for real (and not just copy-and-paste), in 2003, that I began to understand how JS works. I've been using it ever since. (I miss the old days of JS, by the way.)


Kotlin

I started learning Kotlin in 2023, so at the time of this writing it's still very new to me. I really like it though, not only because it's much easier to develop with than Java, but also because of its ease of use on many devices.

Object-Oriented Programming (OOP)

I learned OOP a bit later in my career, around 2006 or so. In my first years with PHP I structured code procedurally - partly because OOP support in PHP 4 was not very good, if I remember correctly. When I started reading about OOP principles, design patterns and so on, though, I saw its real value, and it's been my main programming style ever since.


PHP

This was the first language I learned for real, end to end. I started with it in 2003 and used it as a freelancer until 2007, when I finally started using it while working for companies. It's a nice language, but not my favorite, and I wouldn't create a new project from scratch with it unless there were a very compelling reason to do so. (I would prefer Python instead, if possible.)


Python

My favorite language. I've been using it since 2007 (when I started studying it), professionally since 2009 or so with marginal tools, and since 2012 as my main language. I know it has its limitations (I'm looking at you, GIL!), but I still love it anyway, even though I wouldn't use it in some projects, depending on the needs.

Ruby / Ruby on Rails

I learned Ruby and RoR during 2007 and used them at Vista Publicidade and then at GoNow. Not my favorite, but it's a nice platform.


Rust

Rust is currently my second favorite programming language. I've been learning it for a while, but with very few opportunities to use it for real in production. I hope to get more projects to use it on in the future - it's a revolutionary language!


SQL

I've been using SQL since 2003. It's my tool of choice when I have to... query SQL databases. :-P


TypeScript

I started learning TypeScript back at YouGov, but only had marginal exposure to it. I wouldn't say I'm experienced with it; I just have a basic notion of what it is: a superset of JavaScript with type checking (which, unfortunately, not every JS dev follows strictly).


XML

I'm glad I haven't had to touch XML in a while, and I hope it goes away in the not too distant future (though I doubt that will happen any time soon - like, in the next few decades). I had to work with it in a number of different situations, from around the time I started to program up to integrating with SOAP services and the like. Not fun. Anyway, I'd say I used it most intensively between 2007 and 2011 or so - I can't remember exactly anymore.


XSD

I used XSD at UOL (I don't remember what for) and at Dafiti, for XML schema validations. Talk about something created to be not fun. I'd say I used it between 2008 and 2011, maybe.



aiohttp

I used this library to build the API service for the new data backend for BrandIndex, at YouGov. This was between 2021 and 2022. When I left the company the project was still a prototype.


asyncio

I used asyncio first in the legacy BrandIndex Admin system, which ran on Tornado, and then in the new data backend I was developing for BrandIndex. This was between 2014 and 2022.
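As a minimal, stdlib-only sketch (illustrative, not code from those systems) of why asyncio is pleasant for I/O-bound concurrency:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (e.g. a database or HTTP request).
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # Both "requests" run concurrently; total time ~= max(delay), not the sum.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

results = asyncio.run(main())
print(results)
```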


CakePHP

I used Cake to build small websites during part of my career, approximately between 2003 and 2008.


CherryPy

I used CherryPy as a framework for a number of different systems at YouGov. This was between 2014 and 2022.


Django

I first started using Django around 2008 or so, when it was still in very early versions. I kept using it at Nokia Siemens Networks, Canonical and then YouGov.


FastAPI

I experimented with FastAPI as the framework for the new data backend for BrandIndex, around 2021. But it turned out to be significantly slower than aiohttp in my cases, so I opted to ditch it. It's a really impressive piece of technology, though.


Flask

My first contact with Flask was in one of the applications that made up the customer-facing SSO at YouGov, around 2014. Then we decided to use it as the foundation for the new BrandIndex we created and launched in 2018. We used it with gevent and uWSGI.


jQuery

I don't remember exactly when I started using jQuery - I think it was 2006 or so. My big exposure to it started in 2008, though, when I worked as a frontend dev at UOL. I even started creating some libraries on top of it, and ended up helping the jQuery devs improve the performance of the "contains" method, if I remember correctly. As much as I think there are more modern frameworks nowadays, I think jQuery suffered a lot of unfair criticism from developers who wrote super messy code with it and blamed the library instead of their own code. I have fond memories of it.


React

I only have marginal exposure to it - we did use it a lot in BrandIndex, but at a point where I was no longer involved in the frontend. So I have an idea of how to structure projects in React (components, state etc.) and of how JSX works, but I wouldn't say I'm experienced in it.

There was also a library that I learned together with jQuery, or close to it - I can't remember exactly anymore. It was around 2007 or so; I think I learned it together with Ruby on Rails, maybe. Anyway, it's dead tech now, fortunately.


Symfony

I first used Symfony as a freelancer, when I discovered it was a much superior framework to CakePHP. But that was at a time when Symfony was super new and not very evolved - I don't think it had even been broken into components yet. I want to say I used it between 2007 and 2008, or something like that.


Tornado

I used it a bit in toy projects first, then for real in the legacy BrandIndex Admin, from 2014 to 2022. Not sure I would pick it for a new project, though.


Twisted

I only had real exposure to it when working with it at Canonical, between 2012 and 2014. I'm surprised that it still exists - and is very much used.

Zend Framework

I first, and only, used Zend Framework when working on the backend system for Dafiti. I remember it being modular and flexible, and not too annoying to work with - although I liked Symfony more. This was between 2011 and 2012.

Database systems and other data stores

Apache Pinot

I used Pinot at YouGov, in BrandIndex, first as the replacement database for the internal usage analytics feature (it proved to be much more performant than the other options we researched), and then as the analytics database for the new data backend for BrandIndex (which was still a prototype when I left). This was between 2021 and 2022. This is my favorite OLAP database.

I'm also a contributor to the python-pinotdb driver.

Apache Cassandra

I used this database at Canonical, more specifically for Ubuntu One (now defunct). This was between 2012 and 2014.

I experimented with this database at YouGov, around 2021, as a replacement database for the internal analytics system for BrandIndex. I decided not to use it though, because of performance.

Apache Druid

I experimented with this database at YouGov, around 2021, as a replacement database for the internal analytics system for BrandIndex. I decided not to use it though, because of performance.


Carbon

I used Carbon together with Graphite as the first system metrics solution for BrandIndex, between around 2015 and 2017 or so.


I experimented with this database at YouGov, around 2021, as a replacement database for the internal analytics system for BrandIndex. I decided not to use it though, because of performance.


InfluxDB

I first started using InfluxDB as a custom system metrics database for BrandIndex, around 2016, and have kept using it ever since. I'm a big fan of it, although I know it has limitations (watch out for tag cardinality!).
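To illustrate the cardinality point: in InfluxDB's line protocol, tags are indexed and every distinct tag-value combination creates a new series, so high-cardinality identifiers (user IDs, UUIDs) belong in fields instead. A rough sketch of building a point (measurement and key names are made up, and the encoding is simplified - real field values need type suffixes and quoting for non-floats):

```python
def line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Build a single (simplified) InfluxDB line-protocol point.

    Tags are indexed: every distinct tag-value combination creates a new
    series, which is why high-cardinality values should go in fields.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

point = line_protocol("requests", {"host": "web1"}, {"duration_ms": 12.5}, 1_000_000_000)
print(point)
```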


Memcached

I can't remember when I first started using it - I think it was around 2003 - but I used it a lot for simple caching in small applications. For many years now I've preferred Redis, though, which is far superior IMO.


MongoDB

Most of my work with MongoDB was on the old BrandIndex, at YouGov. But it was not the appropriate database for our cases there, so in the new system we used PostgreSQL instead. I really like MongoDB, but I'm not a fan of how some projects use it when an RDBMS would be more appropriate.


MySQL

This was my first database; I started learning it together with PHP, back in 2003. It's a really nice database, very fast, but I prefer PostgreSQL - I don't have fond memories of MySQL truncating my VARCHAR fields without errors. I used it up to 2012.


PostgreSQL

My favorite RDBMS. It's fast, consistent, extensible, open source - what's not to like, right?! I've been using it since I don't know when - maybe 2008, I guess, when I developed my first Django website (my personal one).


Prometheus

I haven't written any PromQL queries for real; I've just used Prometheus to grab Kubernetes metrics in Grafana. But it's a database I would like to use more, and to learn more PromQL for.


RabbitMQ

I first started using RabbitMQ when I worked on Ubuntu One, at Canonical. I remember moving to topic exchanges in order to better distribute messages for processing media files in that product - this allowed faster ingestion of small files, like text ones, without impacting ingestion of larger ones, like video files. I'm a big fan of RabbitMQ, and I think it gets unfair criticism from people who don't know how to configure it properly (e.g. regarding durable queues etc.). In my experience, a well-configured RabbitMQ instance is super reliable and robust.
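To illustrate how topic exchanges enable that kind of routing, here's a rough, pure-Python sketch of AMQP-style binding-key matching (`*` matches exactly one dot-separated word, `#` matches zero or more). The `media.*` keys are hypothetical illustrations, not the actual ones from Ubuntu One:

```python
def binding_matches(pattern: str, routing_key: str) -> bool:
    """AMQP-style topic match: '*' = exactly one word, '#' = zero or more."""
    def match(p: list, k: list) -> bool:
        if not p:
            return not k
        if p[0] == "#":
            # '#' absorbs zero or more words.
            return match(p[1:], k) or (bool(k) and match(p, k[1:]))
        if not k:
            return False
        return (p[0] == "*" or p[0] == k[0]) and match(p[1:], k[1:])
    return match(pattern.split("."), routing_key.split("."))
```

A binding of `media.small.*` would match `media.small.text` but not `media.large.video`, which is what lets fast and slow workloads be consumed by different worker pools bound to the same exchange.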


Redis

My favorite KV store - even though I would experiment with others nowadays, as there are some promising alternatives out there. It's an underused piece of technology - much more powerful than just a cache store - and I wish I could use more of its features.


Amazon S3

I used S3 at YouGov for a number of different things, mostly for BrandIndex: data backups, the Apache Pinot deep store, and JSON files of all sorts. I used it there for about 6 years or so.


I experimented with this database at YouGov, around 2021, as a replacement database for the internal analytics system for BrandIndex. I decided not to use it though, because of performance.


Apache ZooKeeper

I used ZooKeeper as part of the stack for Pinot when working at YouGov. This was between 2021 and 2022.

Argo CD

I use Argo CD at Yalo as a facilitation layer to manage our deployments there.


Buildbot

I used Buildbot as a Continuous Integration and Continuous Deployment system for a project I started (a payment gateway) but which never took off. The idea was that every commit to the master branch ran the CI pipeline and automatically deployed to production. I really liked it, but nowadays there are far more advanced tools.


CI/CD

I've been using CI/CD tools for many years, I guess since 2008 or so. But only in a few projects did I have the chance to do true Continuous Integration (which should be done avoiding branches, committing directly to "trunk", to get faster feedback on changes).


Docker

My first exposure to Docker was at YouGov, around 2017 or so, as part of the transition of our systems to Kubernetes. I've been using it ever since, to run local stuff.


Docker Compose

I started using docker-compose around 2017, a bit after I started learning pure Docker. We decided to use it for most projects to ease the path to having systems running locally.


Grafana

I started using Grafana integrated with InfluxDB for custom system metrics in BrandIndex, when I transitioned away from Graphite. I've been using it since 2016 or so.


Graphite

This was the first system I used to store custom system metrics for BrandIndex, at YouGov. It was simple and nice to use, but way too limited and slow. Nowadays there are far better options.


Jenkins

Jenkins was probably the first CI system I got in touch with, I think back at UOL around 2008. I used it at YouGov as well, but fortunately we migrated to GitLab CI. I think it's too dated and there are better tools available - although it's super robust and extensible, I admit.

jFrog Artifactory

I used Artifactory at one of the jobs I had, for managing artifacts for other engineering teams as part of their CI/CD process. This was between 2022 and 2023.


k3s

I used k3s as a base for building the new data backend for BrandIndex - I needed a minikube-like system to be able to run Argo Workflows locally, and k3s turned out to be a super light alternative, so I used it, between 2021 and 2022. Really nice technology!


kubectl

Even though I could use Rancher or similar to check on my K8s workloads, like some of my colleagues, I always prefer kubectl, as it's usually faster and allows some nice automations.


Kubernetes

At YouGov, we started migrating everything from Velociraptor to Kubernetes, which was a great idea. It took me a while to understand how it works (the learning curve is a bit steep), but now I can't think of any other way to orchestrate deployments in a company, even for small clusters. (Yeah, I know, Stack Overflow doesn't use it and works pretty great, but they probably built their own tooling to orchestrate their stuff.)

Linux (GNU/Linux)

I first dabbled in it around 2005 or so; a friend helped me install Mandrake Linux. It was a hard experience, but fun. I went back to Windows, though, as it was easier for me at the time. In 2007 I started using it again, and that time I transitioned to it definitively. I have had to use Windows and macOS in some of my jobs, but I much prefer Linux.


LXC

I used LXC at Canonical, between 2012 and 2014, and then at YouGov for orchestrating containers in Velociraptor, up to 2020 or so. I haven't used it since, and hope I won't need to - I much prefer the Open Containers stack: Docker, containerd etc.


New Relic

I only used New Relic at YouGov, for many systems, as the base metrics system for web transactions, database queries etc. It's a good tool, but expensive, and it usually gets lost with concurrency in Python.


Provy

I contributed to the Provy provisioning app, which is now discontinued. Ansible and Salt are better alternatives to it.


Sentry

I used Sentry a lot at YouGov, in many projects, from 2014 to 2022 (first on-premises and then as SaaS). I like it, despite the annoying fact that it truncates payloads and makes me pull my hair out trying to guess the rest of the stack trace that was left out.


I have marginal experience with it, haven't used it much. Only got in touch with it when I joined Yalo. Hopefully I'll use it more in the future.

Data Engineering

Apache Kafka

I used Kafka integrated with Pinot for the revamped internal usage analytics for BrandIndex, at YouGov. This was between 2021 and 2022.

Argo Workflows

I used Workflows as the basis for the ETL I developed for the new data backend for BrandIndex, which was still a prototype when I left. This was between 2021 and 2022.


CSV

I've used CSV files and libraries for many years - I can't remember since when. I'm not a huge fan of using it programmatically, but I recognize its importance in data exchange.


JSON

I can't remember when I started using JSON; I think it was at UOL back in 2008. I've been using it ever since, and my favorite codec library for it in Python is orjson - which at the time of this writing is the fastest JSON library available for Python.


numexpr

I used numexpr to develop fast, runnable expressions in BrandIndex, so that they both performed basic calculations and communicated the formulas to the data scientists.


NumPy

My first contact with NumPy was on the old BrandIndex, back in 2015 or so, when I had to optimize some calculations. Then, when I created the new BrandIndex and its whole data analysis feature, I made extensive use of NumPy (along with Pandas and Xarray). I'm a big fan of it.


Pandas

I first started using Pandas in BrandIndex, at YouGov, probably around 2017 or so - a bit later than NumPy. It's a really nice library, but it can get super slow and consume lots of memory. Nowadays I'm more conscious about when to use it - and when not to. (In some situations I would probably use Polars instead.)


Polars

I've only really experimented with Polars, when working on BrandIndex, but it's a very promising library, and I would probably use it as a replacement for Pandas in many scenarios.


Xarray

Fantastic library. I first got in touch with it when trying to work with Pandas "panels", and learned that the feature was deprecated in favor of Xarray for dealing with multi-dimensional datasets.

Web servers

Apache HTTP Server Project

This was the first HTTP server I got in touch with, when I started developing in PHP. Since I started using Nginx, though, I haven't used it much. This was between 2003 and 2007, approximately.


gevent

I started using gevent when working on the customer-facing SSO at YouGov, around 2014, and then as part of the new BrandIndex we launched in 2018. I really like it - it makes it easy to write asynchronous code in a synchronous style - but it can sometimes get finicky with context switching between coroutines. It's very reliable the vast majority of the time, though.


Nginx

I first learned about Nginx from a blog post by WebFaction (now a defunct company) about how fast it was compared to httpd (Apache). So I tried it, loved it, and it's been my HTTP server of choice ever since - since 2006 or so, as far as I remember.


uWSGI

uWSGI is one of the best pieces of technology I've ever seen, and it's painful to admit that it's now obsolete and discontinued. People see it as just a web server, but it's much more than that - as a matter of fact, I used its in-memory caching mechanism in BrandIndex, and it's as fast as in-process Python caching, but with the benefit of sharing the cache between worker processes! That and the emperor/vassals feature are some of my favorites.

I wish somebody created a more modern web server, inspired by it, maybe in Rust. Maybe I can start something like that, who knows - it's quite a tall order though.

Networking and distributed systems architectures

0MQ (zeromq)

I used 0MQ first in experiments I did many years ago, to see how I could build distributed systems with it. Then I used it on a legacy data backend (QQ7) in BrandIndex, at YouGov. This was between 2014 and 2022.


gRPC

I only really experimented with gRPC when I was building the new data backend for BrandIndex, around 2022. I didn't keep using it because the performance gains weren't significant enough to justify migrating away from the much simpler OpenAPI approach.


HTTP

Most of my experience with HTTP is with versions 1.0 and 1.1, but I've also experimented a bit with HTTP/2 (for gRPC), and at some point I'd like to try HTTP/3 as well.

OpenAPI 3

When I was rebuilding BrandIndex from scratch, I had to decide how to appropriately expose the APIs of the different services I was creating. I went through a number of different standards and protocols, and the one that served me best was OpenAPI 3 - both because of its flexibility and because of the libraries available. I've been using it since 2017 or so.
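For reference, a minimal OpenAPI 3 document looks something like this (a hypothetical health-check endpoint, not part of any BrandIndex spec):

```yaml
openapi: "3.0.3"
info:
  title: Example service   # hypothetical service name
  version: "1.0.0"
paths:
  /health:
    get:
      summary: Liveness check
      responses:
        "200":
          description: Service is up
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
```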


REST

I've had the opportunity to implement RESTful, or REST-ish, APIs in a number of projects I've worked on. Nowadays my preference is to design public APIs with OpenAPI first, while keeping them as compliant with REST as possible.

Service-Oriented Architecture (SOA)

My first experience with SOA was at Dafiti, back in 2011, when I had to work on different services split by subject. Well, it was just a frontend and a backend system - not too complex or too distributed - but it was the first time I became conscious of the importance of SOA.

Then, at Canonical and then at YouGov, I used a lot more of it, with multiple systems integrated to each other.


TCP

I don't have a super advanced understanding of TCP, but I know how it basically works, in terms of delivery guarantees and reliability, and that's enough for me. Using it bare, instead of HTTP, is a super rare situation for me; I almost always end up using other protocols built on top of it, like 0MQ.


UDP

Most of my experience with UDP comes from sending InfluxDB metrics through its wire protocol. One interesting use of the protocol for me was when, at BrandIndex, I had to migrate a system from on-premises to AWS, which broke the InfluxDB metrics because of firewall rules. The sysops couldn't restore the integration, and I still needed those metrics in InfluxDB. At first I tried using the RESTful API, which was a terrible choice (because of performance and resource usage on the client), but then I figured I could use Telegraf to receive the metrics via UDP packets and re-route them to InfluxDB via its RESTful API. Having Telegraf as a sidecar process in each relevant container did the trick, and saved our metrics!
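That sidecar setup can be sketched with a Telegraf config along these lines (the port, URL and database name are illustrative, not the actual setup):

```toml
# Telegraf sidecar: accept line-protocol metrics over UDP locally,
# then forward them to InfluxDB over HTTP(S).
[[inputs.socket_listener]]
  service_address = "udp://:8094"
  data_format = "influx"

[[outputs.influxdb]]
  urls = ["https://influxdb.example.internal:8086"]
  database = "app_metrics"
```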


OAuth 2.0 / OpenID Connect

I used a lot of OAuth2/OIDC when working on the customer-facing SSO at YouGov, and then when leading the SSO team. Both for user auth (Authorization Code grant type) and for app auth (Client Credentials grant type).
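As a rough sketch of the difference between the two grant types (the endpoint URLs, client names and scopes below are hypothetical, and a real client would of course also POST the token request and handle the response):

```python
from urllib.parse import urlencode

def authorize_url(base: str, client_id: str, redirect_uri: str,
                  scope: str, state: str) -> str:
    """Step 1 of the Authorization Code grant: send the user's browser here."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # CSRF protection; must be verified on the callback
    }
    return f"{base}?{urlencode(params)}"

def client_credentials_body(client_id: str, client_secret: str, scope: str) -> dict:
    """Token request body for machine-to-machine (Client Credentials) auth."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }

url = authorize_url("https://sso.example.com/authorize", "web-app",
                    "https://app.example.com/callback", "openid profile",
                    "random-state")
body = client_credentials_body("service-a", "s3cret", "api.read")
```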

Password encryption

At both Canonical and YouGov I found systems using bad password encryption, and got the green light to migrate them to more appropriate algorithms and hashing functions. At YouGov, I created a library to help with password validation according to the NIST recommendations.
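As a stdlib-only sketch of the general idea - a salted, deliberately slow key-derivation function plus constant-time verification (the iteration count is illustrative and should be tuned; this is not the library mentioned above):

```python
import hashlib
import hmac
import os

# PBKDF2-HMAC-SHA256 with a per-password random salt. The iteration count
# below is just an illustration; tune it to your hardware and latency budget.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
```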

Authoring tools

Adobe Flash

One of the first things I started learning when building websites, back in 2000. I started when it was still a Macromedia technology (before Macromedia got bought by Adobe). I did plenty of animations in this thing. This was between 2000 and 2007, approximately.

Libraries in general


ADOdb

I used this library to connect my PHP programs to MySQL more easily, before I jumped to PDO. I used it from about 2003 to 2008.


PDO

I started using PDO as a replacement for ADOdb, probably around 2008 or so - I can't remember exactly.


Pydantic

I used Pydantic between 2021 and 2022 - it's a fantastic library! I would be careful about RAM usage and performance, though, depending on the situation.

Version Control Systems


Bazaar

I used Bazaar when I worked at Canonical, for versioning projects, integrated with Launchpad. This was between 2012 and 2014.


Bitbucket

I used Bitbucket at a number of different points in my career, but most notably at YouGov, for versioning Mercurial repositories before we moved everything to git. I think this was between 2014 and 2017 or so.


git

My first contact with git was around 2010 or so. I was coming from an SVN world, so it took a while to wrap my head around it. It's been my favorite VCS ever since.


GitHub

I don't really remember when I started using GitHub, but I guess it was just after I started learning git, so probably around 2010. I've used it mostly for storing projects and fetching libraries, but lately I've also been doing some things with GitHub Actions, for CI.


GitLab

My first contact with GitLab was at YouGov, around 2017 or so. We first used it for repo storage only, and then for CI and CD as well. It's an amazing ecosystem; I really like it.


Gitorious

I used Gitorious at Dafiti, when we migrated the entire codebase from SVN to git. It was not an easy feat - the migration had to be redone a few times until we got it right without losing history. This was around 2012, I guess. Thankfully the project is now defunct and superseded by GitLab.


Mercurial

I used Mercurial at YouGov and maybe in a few open source projects here and there. I haven't used it for years now, and I don't miss it.

Subversion (SVN)

My first experience with SVN was at UOL, back in 2008 - it was my first VCS. I also used it at Dafiti. I'm glad I've never had to touch it again.

Template engines

Apache Velocity

I used the Velocity template engine when working for a product called Emprego Certo, at UOL. This was between 2008 and 2009.


Jinja

I used Jinja at YouGov in a number of different projects, for almost 9 years. It's my favorite template engine.


Smarty

Smarty was the first template engine I used, with PHP, from around 2006 up to 2008. I haven't used it since.



cProfile

cProfile is my tool of choice for performance profiling in Python - even though it doesn't work well with asyncio, multi-threading or multi-processing. I usually create pstats files with it and then visualize them in snakeviz.
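A minimal sketch of that workflow, using the stdlib only (the profiled function is just a stand-in for real code under investigation):

```python
import cProfile
import io
import pstats

def busy() -> int:
    # Stand-in workload; in practice, the code you want to investigate.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Dump a pstats-compatible file for snakeviz ("snakeviz out.pstats")...
profiler.dump_stats("out.pstats")

# ...or inspect the top entries directly, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```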

Test-Driven Development (TDD)

It took me a while to get the gist of TDD. When I started learning it, in around 2008 or so, I thought it was nonsense. But then the whole "TDD is about design" realization thing got me and I finally saw how crucial it is for building high quality and high reliability systems. Since then I've always been trying to convince other engineers to use it.
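A tiny sketch of that rhythm - the tests are written first and pin down the behavior, then just enough implementation is written to make them pass (`slugify` is a made-up example, not from any real project):

```python
import unittest

# Step 1: tests written first, defining the desired behavior.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_extra_whitespace(self):
        self.assertEqual(slugify("  a   b  "), "a-b")

# Step 2: the minimal implementation that makes the tests pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point of the discipline is that the failing test forces you to design the interface from the caller's perspective before committing to an implementation.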


XAMPP

I don't know if anybody even uses XAMPP today, but it was very helpful for getting Apache, PHP and MySQL installed on my Windows machine when I was starting to program. I used it between 2003 and 2008 or so, as far as I remember. Maybe it's still useful for Windows folks - which is not my case anymore.