A Cost to Picking Junk

Seth Godin wrote an interesting little piece about curation in the age of internet giants.

There are some interesting bits in it:

They were curators and their curation led to promotion and attention. There was a cost to picking junk, and a benefit to earning trust.

The tech giants have surrendered that ability, with the costs and benefits that come with it. They end up disrespecting creations and their creators.

Or:

The platforms are built on the idea that the audience plus the algorithm do all the deciding. No curation, no real promotion, simply the system, grinding away.

This goes back to something I wrote previously about the impact of surrendering power to tech giants, who are increasingly the web's gatekeepers, controlling access to information. We are in the process of handing over information curation, access and discovery to those same entities, with all the impact and risk that comes with it.

PyInstaller + PyFiglet = Trouble

This essay is a slightly extended version of a comment I made on PyInstaller’s official repository, regarding PyInstaller and PyFiglet integration, as well as some known issues.

PyFiglet allows you to create really large letters out of ordinary text, as per figlet's own description, which results in the kind of banner output commonly seen in command-line interfaces. One example would be:

# From www.figlet.org.
 _ _ _          _   _     _     
| (_) | _____  | |_| |__ (_)___ 
| | | |/ / _ \ | __| '_ \| / __|
| | |   <  __/ | |_| | | | \__ \
|_|_|_|\_\___|  \__|_| |_|_|___/
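
For reference, the same kind of output can be generated programmatically with PyFiglet itself. A minimal sketch (the text and the "standard" font are just examples; "standard" is one of the fonts bundled with the library):

import pyfiglet

# Render a string in figlet style; other bundled fonts can be listed with
# pyfiglet.FigletFont.getFonts().
print(pyfiglet.figlet_format("like this", font="standard"))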

Whenever you freeze a Python application with PyInstaller, PyInstaller runs a series of built-in or custom hooks which define what a specific package needs in order to be freezable. PyFiglet doesn't have a built-in hook in PyInstaller, so someone suggested that one should be created so that everyone could use PyInstaller to freeze applications that use PyFiglet. This is where things started to become troublesome.

As it turns out, PyFiglet uses pkg_resources to find out which fonts you would like to use for the displayed text. A runtime hook in PyInstaller - these are different from the pre-runtime hooks mentioned above - registers pkg_resources.NullProvider, which means that everything that uses pkg_resources will forcibly use the NullProvider, triggering the error NotImplementedError: Can't perform this operation for unregistered loader type. In essence, the registered provider for pkg_resources is incompatible with some of the packages we want to freeze.

Since we cannot register a new provider, as the NullProvider will always be registered first, and we also can't easily change PyInstaller's core in order to register a different provider, I've come up with a workaround that builds on top of the suggestions in the discussion mentioned above.

We start by adding a custom hook to be run by PyInstaller, suggested in the initial issue:

# hook-pyfiglet.py
# Collect all of pyfiglet's modules and data files (including the bundled
# fonts) so they end up in the frozen application.

from PyInstaller.utils.hooks import collect_all

datas, binaries, hiddenimports = collect_all("pyfiglet")

This will force PyInstaller to import all known PyFiglet modules, as well as its data files. Following that, we need to force the registration of a new provider, somewhere inside the script that will be frozen, before any use of pyfiglet occurs. An example of this would be:

import pkg_resources
import sys

# Only do this inside a frozen (PyInstaller) build: sys.frozen is set by the
# PyInstaller bootloader and pyimod03_importers only exists in frozen apps.
if getattr(sys, 'frozen', False):
    from pyimod03_importers import FrozenImporter

    pkg_resources.register_loader_type(FrozenImporter, pkg_resources.DefaultProvider)

As I said in my comment, in essence, instead of registering NullProvider, we now register FrozenImporter with DefaultProvider, which inherits from NullProvider anyway but, apparently, doesn't raise the error. I couldn't find any side effects from this, but I would advise caution and thorough testing when using this workaround, to make sure nothing breaks in the script that is going to be frozen.

FOSDEM 2019 - A Review

Introduction

This year I decided to make the trip to Belgium and attend FOSDEM 2019, on February 2 and 3. I have an affection for open source software and, obviously, I had an interest in seeing the conference. I was also curious to understand what the environment at a big conference would feel like - I had never been to anything larger than 500 people.

I had a few talks scheduled, mostly around things I'm currently working on: infrastructure-related talks, how to manage infrastructure, how to use specific tools such as Kubernetes, and some talks about DNS. More than the talks, though, I was interested in wandering through the conference.

I’ve been present on both days, Saturday and Sunday, and I’ll be going over the talks I sat through.

Saturday

I’ve gotten to the conference around the beginning of the afternoon and decided to go straight to a talk, leaving the wandering for afterwards.

It started with Challenges With Building End-to-End Encrypted Applications - Learnings From Etesync. This talk felt a bit like advertising to me, to be honest, but, then again, a lot of talks at FOSDEM are intended to present projects or updates to specific projects. It was interesting to see the problems the developers went through when building end-to-end encrypted applications, but I would have liked to hear about some of the solutions. By the end of this talk, I was confronted with the first instance of organizational issues, when the organizers cut off the speaker because he was running out of time. There were still about 5 minutes until the end of the scheduled slot - 10 minutes, if you count the time to empty the room - but the organizers decided that it was more important to do Q&A than to let the speaker finish the presentation.

After that, I went to DNS over HTTPS - the good, the bad and the ugly, by Daniel Stenberg, creator of cURL. I was really excited about this talk, because of both the speaker and the topic. I was disappointed, as I couldn't hear most of the talk due to sound issues. The room was a huge auditorium and the sound was far from clear, which meant that we couldn't hear more than half of what the speaker said. Such a shame.

I finished Saturday by going to Codifying infrastructure with Terraform for the future. This was a really entertaining and interesting talk. Anton Babenko, the speaker, was witty and laid out some of the best practices one should aim for when managing Terraform resources and modules. I am a newbie to Terraform, so it was very interesting, from my perspective, to get a broader view of the end result of using it.

Sunday

I got to the conference around the same time as on Saturday. I decided to first listen to Open Source at DuckDuckGo. This talk was given in the same auditorium where, on Saturday, I had sat through the DNS-related talk without hearing anything. The sound problems persisted and I had to give up and leave.

Saddened, I decided to go to Computer Games with MicroPython. This was unexpected, because I hadn't planned on going to this talk. Nonetheless, I am glad I went, because it was very interesting. The talk explained how one can use MicroPython to build tiny games that are played and programmed on microcontrollers, what challenges exist in that field - mostly due to resource restrictions - and how to overcome them.

I stayed in the same room to listen to a somewhat more technical talk on Extending Numba. This talk reviewed how to make Numba, a high-performance Python compiler, accept and process things it can't originally handle. The talk was very hands-on, with code snippets being shown and explained. It was interesting to get out of my comfort zone by listening to a talk on a framework I didn't know, performing computations I didn't understand.

After that, I decided to get out of my comfort zone again and go to How to build an automatic refactoring and migration toolkit. Again, a room without microphones… I couldn't hear anything for 20 minutes. The room was also so packed that I couldn't leave. Horrible.

After another disappointment, I went to Fighting spam for fun and profit, a talk about the new updates to SpamAssassin that have been baking over the last 3 years. This talk was short, simple and straight to the point - like a good talk should be. It got my hopes up, but the conference was already coming to an end.

For the last talk, I decided to go to Writing a CNI - as easy as pie. This talk was about Container Network Interfaces in Kubernetes. It had the added bonus that, given how technical it was, the room was mostly empty, so I was hoping for a good talk. I was not disappointed! It was the best talk I went to. The speaker walked us through what CNIs are and how they work. He then showed us, with live coding, how to write a basic CNI for Kubernetes, from scratch, in about 10 minutes - and it worked at the end of the demo. The first iteration of the CNI had a small bug and failed to initialize, and time was running out… at the last minute, he fixed the bug and the crowd cheered!

Conclusions

In the end, it was a nice experience. Nonetheless, I have to say that having everyone volunteer for the organization can sometimes, unwittingly, cause the kinds of organizational issues I mentioned: technical issues and confusion in some of the rooms. On the other hand, I noticed that a lot of people go to FOSDEM to meet other people and catch up, rather than to listen to talks. I feel like attending with that mindset, different from what I went there to do, might result in a completely different experience of the entire conference.

Contributing to pytest

A few weeks back, I started collaborating on a project that I had only recently begun using. I had started writing Python and needed a test framework to go along with it. After searching for a bit online, I found out that pytest [1] is currently a popular alternative to the standard unittest [2]. Looking at the examples, I felt that pytest was less verbose and less cumbersome than unittest, so I started using it and was quickly fascinated. This is a post about how I started contributing to pytest. I've considered not writing this, as it seems too personal for people to want to read, but this is my blog after all.
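
To give an idea of what I mean by less verbose, here is a rough comparison, using a made-up add function, between a unittest test case and the pytest equivalent:

import unittest


def add(a, b):
    return a + b


# With unittest, a test class inheriting from TestCase and the assert*
# methods are required.
class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(1, 2), 3)


# With pytest, a plain function and a plain assert are enough.
def test_add_pytest():
    assert add(1, 2) == 3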

When I started using pytest, I began looking around for easy issues I could solve. Since I'm not an expert in Python - nor even an average coder in it, to be honest - I started looking for documentation issues that I might help with. Documentation also seems to be something people always try to get away from, so I guessed my help would be appreciated. I landed on Fixture docs - possible point of confusion, which seemed easy enough. There was some confusing wording in the documentation of pytest's fixture functionality and it needed to be changed. A solution had already been proposed, so I just started changing the documentation. After I had made my changes, I submitted a pull request. Unfortunately, it didn't go as smoothly as I had planned. I forgot to go through one step of the contributing guidelines and it ended up breaking my build. Nonetheless, pytest's maintainers were very understanding and helped me review the PR and fix all the problems. In the end, it got merged in 2 weeks. Given it was a rather large change to the documentation, I was pretty excited.

With that submission resolved, I decided to try another one. I submitted another small pull request to give users a formal way to cite pytest whenever they use it in academic work. It wasn't that hard and, again, most of it had been previously defined by the maintainers. Either way, it was very enjoyable to be able to involve myself with the community and give back. That PR has already been merged too, which means that I have made 2 full contributions!

After that, something very interesting happened. I ended up being added as a member of the pytest-dev organization, which means I can now contribute more freely and to more pytest projects under that umbrella. This was really unexpected, since I had screwed up that initial PR, but, at the same time, it was very exciting to feel that my contributions were of value. It is also an incentive to keep trying to contribute to OSS, which is something I've been trying to get active at for quite some time.

It is not that I suddenly have a wealth of knowledge on the topic, or that I can, all of a sudden, magically contribute to anything, but it is important to understand that everything starts with first steps. Those steps then build, incrementally, into something entirely new and unexpected. That's the importance of trying, to me at least. As a closing note, I'm now trying to work on an issue (-p does not pick up value if there is no space). Let's see if I can make something out of it.

References

[1] https://docs.pytest.org/en/latest/
[2] https://docs.python.org/3/library/unittest.html

PostgreSQL 11 (Beta)

PostgreSQL 11 (Beta) is out with some very interesting features worth taking a look at. PostgreSQL is already my go-to relational database and - with no science to support this - the easiest, most performant relational database management system I've used. I've tried MySQL, Oracle and SQL Server which, in reality, might be a signal that my standards are not very high. Nonetheless, I'm always happy when I see a new release of PostgreSQL. I've based my analysis on a skim through the release notes.

There have been some interesting updates performance-wise, announced on the release homepage. It seems that partitioning and parallel operations have been improved, either by allowing new operations or by speeding up existing functionality.

Apart from parallel operations and partitioning, there are also some interesting performance improvements to indexing, such as allowing hash index pages to be scanned, and useful features, such as including columns in indexes that are not part of unique constraints but might be available for “index-only scans”. Another announced change is the introduction of a Just-In-Time compiler, based on LLVM, which is supposed to speed up some parts of the query plan and, in turn, improve query execution. I expect this to introduce some interesting speedups, especially when evaluating WHERE clauses (as the documentation suggests). I'll be interested in understanding, further down the road, the impact this feature actually has - one way to check it is to run EXPLAIN, as it should indicate JIT usage.
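
As a rough sketch of what such a check could look like from Python (the connection parameters and the table are made up here, and the JIT section only shows up if the planner considers the query expensive enough):

import psycopg2

# Placeholder connection string; adjust to your own setup.
conn = psycopg2.connect("dbname=mydb user=myuser")
with conn.cursor() as cur:
    # EXPLAIN (ANALYZE) prints a "JIT:" section when the planner decided
    # the query was costly enough to be worth compiling.
    cur.execute("EXPLAIN (ANALYZE) SELECT sum(value) FROM big_table WHERE value > 100")
    for (line,) in cur.fetchall():
        print(line)
conn.close()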

The most interesting points for me are, of course, security related, and it seems that PostgreSQL keeps improving in that regard too. There's been an improvement in the ability to specify complex LDAP configurations and we can now use LDAP over TLS, which means we can run the LDAP integration through encrypted channels. Some roles and permissions have been created for sensitive operations - accessing the file system and importing/exporting large objects - that were previously only controlled through superusers, which means we have better tools to manage access control, with more precise policies over users' abilities.

I have only spoken about a handful of changes, related to performance and security, but there have been a lot of them! The release notes spread across twelve different modules and there is news regarding almost every section of PostgreSQL, either bug fixes or new functionality.

Keeping an SSH Connection Alive from a Client

Sometimes you run scripts over SSH that take a long time to finish and, sometimes, the server closes your connection. This is bound to create issues. I recently faced this and went searching for a possible solution. It turns out there's a simple one. From ssh_config [1]:

ServerAliveInterval Sets a timeout interval in seconds after which if no data has been received from the server, ssh(1) will send a message through the encrypted channel to request a response from the server. The default is 0, indicating that these messages will not be sent to the server. This option applies to protocol version 2 only.

Open ~/.ssh/config in an editor and type:

Host example.com
    ServerAliveInterval 300

This will send a message to the server every 300 seconds (5 minutes), which will keep your connection alive. You can tweak that value to whatever works best for each use case. If you want to apply this to all hosts, just replace example.com with *.

This is a pretty simple way of avoiding closed connections when using nohup or & is not an option.

References

[1] https://linux.die.net/man/5/ssh_config

Understanding SSL Chaining

SSL is a mess, sometimes. It is increasingly important, yet it is also hellishly difficult to debug and, if not perfectly understood, it can be a pain to implement correctly. With the rise of automated tooling to help with the implementation and management of SSL (certificate-wise and more), we have seen an increase in encryption usage in our HTTP connections ([1], [2]).

What is SSL, exactly? Putting it simply, it is a cryptographic protocol that standardizes the way an encrypted connection between two nodes is established. The acronym itself stands for Secure Sockets Layer. When we say we are connecting with HTTPS, we are essentially asserting that our connection uses the HTTP protocol to exchange information inside an SSL-compliant communication channel. It is important to mention that SSL, although commonly used to refer to a secure communication channel, has been deprecated in favor of TLS (Transport Layer Security). They are essentially the same thing at a higher level, although there are obvious differences between the two at a lower level, and TLS has replaced SSL.

Recently, I came across an SSL configuration issue that was getting hard to debug. After turning Apache’s logs upside down I was able to verify that, somewhere along the way, Apache was logging the following error:

alert unknown ca

As with everything surrounding SSL, the error message is fairly cryptic, short and concise. I had no idea what to do with it. I pasted it into Google and hoped for the best. Google brought me a StackOverflow question [3] that wasn't exactly the same. Nonetheless, I read it, and the answer, after explaining the most probable reason for the issue at hand, offered the following notice:

Another reason might be that you’ve used the correct certificate but failed to add the necessary chain certificates.

This looked promising. I looked at my configuration and, sure enough, Apache was pointing at the wrong file for SSLCertificateChainFile. This left me with a question: what exactly is this chaining sorcery?

DNSimple has a very thorough explanation of the concept [4]:

There are two types of certificate authorities (CAs): root CAs and intermediate CAs. In order for an SSL certificate to be trusted, that certificate must have been issued by a CA that is included in the trusted store of the device that is connecting.

If the certificate was not issued by a trusted CA, the connecting device (eg. a web browser) will then check to see if the certificate of the issuing CA was issued by a trusted CA, and so on until either a trusted CA is found (at which point a trusted, secure connection will be established) or no trusted CA can be found (at which point the device will usually display an error).

Not all CAs (Certificate Authorities), the entities that issue digital certificates, are trusted by all devices. At the same time, CAs that are not trusted by a device can still sign digital certificates, which means that, for any number of devices, there are an awful lot of certificates that might not be directly trustable. For this reason, people came up with a smart way of managing this situation. Each device has a central repository (a trust store) which defines which CAs it trusts. When a device, usually through a browser, looks up a specific endpoint (www.google.com, for example), it fetches the digital certificate of that endpoint. The CA that signed that certificate might not be a trusted entity on that device. When that happens, the device tries to establish a link between the CA that issued the certificate and a CA that the device actually trusts, based on the relationship between the issuing CA and its parent CAs. This trust relationship, again from DNSimple, is called a “chain of trust” [5].

For these trust relationships to work, each certificate that is signed has a series of intermediate certificates, issued by intermediate CAs that are backed by the root CAs. This allows each of those intermediate CAs to sign several other certificates that will, in the end, be trusted, given the “chain of trust” that leads back to the original trusted CA. It also keeps the number of root CAs low, making them easier to validate and secure, while delegating and decentralizing the work of signing certificates across a higher number of nodes.

In my case, the fact that I had misconfigured SSLCertificateChainFile meant that Apache, which was communicating with my clients, was telling them that the certificate they were receiving had been signed by an intermediate CA that my clients didn't recognize as trustworthy. By configuring it properly, Apache started sending the intermediate certificate, essentially saying “here's the certificate of the CA that signed this certificate”, and my clients were able to establish the “chain of trust” necessary to validate the connection. Of course, the inner workings of all this are complex, but I think this paints an interesting and reasonably complete high-level picture.
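
To see the same thing from a client's point of view, here is a small sketch using Python's ssl module (the hostname is just an example; any HTTPS endpoint works). The handshake performs the same kind of chain validation a browser would, against the system's trust store:

import socket
import ssl

hostname = "www.example.org"
context = ssl.create_default_context()  # uses the system's trust store

try:
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # The handshake only succeeds if a chain from the presented
            # certificate to a trusted root could be built.
            print(tls.getpeercert()["issuer"])
except ssl.SSLError as error:
    # A server that omits its intermediate certificates will typically fail
    # verification here, much like the "unknown ca" alert above.
    print("Chain validation failed:", error)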

In the end, SSL chaining is a concept that has been devised to allow a complex environment of CAs to be intertwined while, theoretically, maintaining an easily traceable cluster of trustworthy CAs and certificates. As we have seen recently ([6], [7] and [8]), this “chain of trust” can be manipulated and is not as thorough as we would sometimes wish for but, for now, it's probably the best we have and we should keep working towards a fully encrypted web.

References

[1] https://www.wired.com/2017/01/half-web-now-encrypted-makes-everyone-safer/

[2] https://transparencyreport.google.com/https/overview?hl=en

[3] https://serverfault.com/questions/793260/what-does-tlsv1-alert-unknown-ca-mean

[4] https://support.dnsimple.com/articles/what-is-ssl-certificate-chain/

[5] https://support.dnsimple.com/articles/what-is-ssl-root-certificate/

[6] https://www.scmagazine.com/google-proposes-revoking-symantec-certs/article/646293/

[7] https://www.securityweek.com/google-wants-symantec-certificates-replaced-until-chrome-70

[8] https://www.cyberscoop.com/trustico-digicert-ssl-certificates-revoked/

A Tale of Kernel Module Loading and Wireless Issues

Recently, I have had to reinstall my OS (Ubuntu) several times due to a mix of filesystem and HDD corruption issues. This has invariably led me to a situation where my Asus K450J's wireless networking doesn't work. My go-to solution, so far, has been to look up an answer [1] on StackOverflow and apply the fix proposed there. I have never given too much thought to why this happens, or why the solution resolves it, but it is intriguing that there is no explanation in the answer. The answer has no votes, wasn't accepted and has no comments on it, which sparked my interest even further. Why does this work?!

Solution

The solution, presented in the answer, is the following block of code:

sudo tee /etc/modprobe.d/blacklist-acer.conf <<< "blacklist acer_wmi"

Run this in the terminal, reboot the laptop and we are done.

Deconstructing

Starting with the easier part, there are 2 clear commands in this answer:

  • sudo;
  • tee.

From sudo’s man page [2], we conclude the following:

sudo allows a permitted user to execute a command as the superuser or another user, as specified by the security policy.

Since the command doesn’t refer any other user, we can safely assumed that we are running the command as the superuser. This makes sense since, at least on Ubuntu’s fresh installation, /etc is a directory not writable by any and every user. tee is just a simple program that allows standard input (STDIN) to be redirected to standard output (STDOUT) and, at the same time, to send that output to some files. This can be easily confirmed in tee’s man page [3]. So far it has been fairly easy and straightforward.

The argument passed to tee is the file /etc/modprobe.d/blacklist-acer.conf. What is this file and why might we care about writing to it? A quick read over The Linux Documentation Project's pages [4] leads me to conclude that /etc is mostly a directory where configuration files, used by several programs, are stored. So… what program uses the /etc/modprobe.d/ directory? As might seem obvious by now, especially after reading TLDP's pages, the program that uses this directory is modprobe [5], which can be used “to add or remove modules from the Linux Kernel” [6]. Reading [5] again, I figure out that writing blacklist acer_wmi tells modprobe to ignore this module. The reasons for that are also described in the documentation (emphasis mine):

Modules can contain their own aliases: usually these are aliases describing the devices they support, such as “pci:123…“. These “internal” aliases can be overridden by normal “alias” keywords, but there are cases where two or more modules both support the same devices, or a module invalidly claims to support a device: the blacklist keyword indicates that all of that particular module’s internal aliases are to be ignored

At this point, I start to suspect that acer_wmi is a Linux kernel module that messes up Asus' wireless configuration on a fresh Ubuntu installation, by claiming to support a device that it really doesn't. Another user, with a Lenovo Thinkpad E420, reports similar issues on Ubuntu 11.10 [7], on the grounds that:

The only glitch is the wifi drivers which don’t run by default, and it could be corrected easily

Another article also seems to point towards issues with acer_wmi [8]:

A certain kernel module, called acer_wmi, causes problems on some laptops. Because it has been loaded when it shouldn’t have been.

I have found other occurrences of this problem on different laptops, from both Lenovo and Asus, which leads me to the question: why is acer_wmi loaded by default?

Why?

Reading acer-wmi.txt [9], a sort of README for the acer-wmi.c Linux kernel module [10], we find: “On Acer laptops, acer-wmi should already be autoloaded based on DMI matching.” Following this line of thinking, we have to look up what “DMI matching” actually means. Searching the Linux mailing list, we find a 2007 patch [11] that enabled “DMI-based module autoloading”, with the following explanation:

The patch below adds DMI/SMBIOS based module autoloading to the Linux kernel. The idea is to load laptop drivers automatically (and other drivers which cannot be autoloaded otherwise), based on the DMI system identification information of the BIOS.

Interesting. Let’s look up what the DMI for my Asus laptop looks like (BIOS-wise):

BIOS Information
        Vendor: American Megatrends Inc.
        Version: 216

(...)

System Information
        Manufacturer: ASUSTeK COMPUTER INC.
        Product Name: X450JF
        Version: 1.0
(...)
        BIOS Revision: 4.6

(...)

I redacted a lot of information to keep this simple. So, clearly my manufacturer is Asus and my model is X450JF…why would the kernel load up an Acer module?
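
As a side note, the identifiers that DMI matching works against are also exposed through sysfs, so a small sketch like the following shows what the kernel sees (the file names are standard on Linux; the values obviously depend on the machine):

from pathlib import Path

# DMI identifiers exported by the kernel; module aliases are matched against
# values like these.
dmi = Path("/sys/class/dmi/id")
for field in ("sys_vendor", "product_name", "board_name", "bios_version"):
    entry = dmi / field
    if entry.exists():
        print(f"{field}: {entry.read_text().strip()}")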

Listing the loaded modules and searching for Asus drivers, using lsmod | grep asus, returns:

asus_nb_wmi            28672  0
asus_wmi               28672  1 asus_nb_wmi
sparse_keymap          16384  1 asus_wmi
asus_wireless          16384  0
wmi                    24576  4 asus_wmi,wmi_bmof,mxm_wmi,nouveau
video                  40960  3 asus_wmi,nouveau,i915

So…Asus drivers are currently being loaded, which means they can be loaded, but why were the Acer ones selected to begin with? I have no idea.

For now, that’s okay. I have understood the high-level reason why it wasn’t working in the first place. I don’t really know why the kernel loads up the wrong module but, by looking at both modules’ code, they have very different approaches to verifying what type of machine it’s dealing with.

Conclusion

What started out as an investigation into a simple command-line expression ended up being a fun dive into the surface of kernel module loading. I didn't find the exact reason why the wrong module gets loaded but, from searching, I see that this is an issue with an open (although expired) bug in the Linux bug tracker.

References

[1] https://askubuntu.com/questions/879581/wi-fi-does-not-work-on-asus-k450j

[2] https://www.sudo.ws/man/1.8.3/sudo.man.html

[3] http://man7.org/linux/man-pages/man1/tee.1.html

[4] https://www.tldp.org/LDP/sag/html/etc-fs.html

[5] https://linux.die.net/man/5/modprobe.d

[6] https://linux.die.net/man/8/modprobe

[7] https://exain.wordpress.com/2011/10/16/wireless-on-ubuntu-11-10-and-lenovo-thinkpad-e420/

[8] https://sites.google.com/site/easylinuxtipsproject/internet#TOC-Turn-the-kernel-module-acer_wmi-off

[9] https://github.com/spotify/linux/blob/master/Documentation/laptops/acer-wmi.txt

[10] https://github.com/torvalds/linux/blob/master/drivers/platform/x86/acer-wmi.c

[11] https://lwn.net/Articles/233385/

The Web's Future

Tim Berners-Lee wrote an open letter about the perils of the internet as it currently stands and how it is getting increasingly centralized in the hands of a few selected companies. This comes from one of the founding fathers of the decentralized web, so it should be deeply concerning, yet I don't think many people are paying attention. It reiterates what I shared last month about “Google's nemesis…”.

Major topics from this open letter:

  • How are we going to get everyone fair and equal access to the internet?
  • How are we going to take back the internet to the level of decentralization it had before? How are we going to take it back from the all powerful gigantic corporations that own it today?
  • How are we going to get more people debating, and contributing their voices, to what the web is going to look like, in the future?

These are not easy questions to answer, and the open letter doesn't try to answer them, but it is a very interesting read. It is certainly timely.

"Google's nemesis..."

An excellent article, Google’s nemesis: meet the British couple who took on a giant, won… and cost it £2.1 billion, written by Rowland Manthorpe, in Wired:

For the Raffs, this remains the burning issue, which the technicalities of auctions and algorithms all too often obscure. They see it simply: Google, or any other search engine, should present only impartial results that do not benefit it financially. It sounds idealistic, but why should it? “After all,” says Shivaun, “this is what people want, and what they believe Google still delivers.” This is the end the Raffs keep in sight: an internet in which search is neutral. And to get it they must keep stepping forward, again and again and again.

Just another account of the subtle, obscure and frightening power that companies like Google have. This shouldn't come as a surprise anymore, given the behavior of these types of companies, which has recently come to light. Yet, it is increasingly important to mention. Algorithms are everywhere nowadays and, as they become increasingly complex, this black-box view over their actions and outputs will need increased and persistent scrutiny, to avoid what has, effectively, already become a reality.

The judgement described and analyzed in this article took more than a decade to reach a tipping point, involving several high-profile tech giants as plaintiffs (Microsoft, TripAdvisor and Expedia, to name a few). If those companies are afraid to battle Google, what's left for the rest of us? I wonder.