Michael Eriksson's Blog

A Swede in Germany


Meltdown and Spectre are not the problem


Currently, the news reporting in the IT area is dominated by Meltdown and Spectre—two security vulnerabilities that afflict many modern CPUs and pose a very severe threat to at least* data secrecy. The size of the potential impact is demonstrated by the fact that even regular news services are paying close attention.

*From what I have read so far, the direct danger in other regards seems to be small; however, there are indirect dangers, e.g. that the read data includes a clear-text password, which in turn could allow full access to some account or service. Further, my readings on the technical details have not been in-depth and there could be direct dangers that I am still unaware of.

However, they are not themselves the largest problem, being symptoms of the disease(s) rather than the disease itself. That something like this eventually happened with our CPUs is actually not very surprising (although I would have suspected Intel’s “management engine”, or a similar technology, to be the culprit).

The real problems are the following:

  1. The ever-growing complexity of both software and hardware systems: The more complex a system, the harder it is to understand, the more likely it is to contain errors (including security vulnerabilities), the more likely it is to display unexpected behaviors, … In addition, fixing problems, once found, becomes harder, more time consuming, and likelier to introduce new errors. (Not to mention a number of problems not necessarily related to computer security, notably the greater effort needed to add new features and make general improvements.)

    In many ways, complexity is the bane of software development (my own field), and when it comes to complicated hardware products, notably CPUs, the situation might actually be worse.

    An old adage in software development is that “any non-trivial program contains at least one bug”. In the modern world, we have to add “any even semi-complex program contains at least one security vulnerability”—and modern programs (and pieces of hardware) are more likely to be hyper-complex than semi-complex…

  2. Security is rarely prioritized to the degree that it should be, and often not even understood. When in doubt, “Our program is more secure!” is (still) a weaker sales argument than “Look how many features we have!”, giving software manufacturers strong incentives to throw on more features (and introduce new vulnerabilities) rather than to fix old vulnerabilities or to ensure that old bugs are removed.

    Of course, more features usually also lead to greater complexity…

  3. Generally, although not necessarily in this specific case: A virtual obsession with having everything interface with everything else, especially over the Internet (but also e.g. over mechanisms like D-Bus on Linux). Such generic and wide-spread interfacing brings more security problems than benefits, for reasons that include a larger attack surface (implying more possible points of vulnerability), a greater risk of accidentally sharing private information*, and the opening of doors for external enemies to interact with the software and to deliberately** send data after a successful attack.

    *Be it through technical errors or through the users and software makers having different preferences. For an example of the latter, consider someone trying to document human-rights violations by a dictatorship, who goes to great lengths to keep the existence of a particular file secret, including keeping the file on an encrypted USB drive and cleaning up any additional files (e.g. an automatic backup) created during editing. Now say that he opens the file on his computer—and that the corresponding program immediately adds the name and path of the document to an account-wide list of “recently used documents”… (Linux users, even those not using an idiocy like Gnome or KDE, might want to check the file ~/.local/share/recently-used.xbel, should they think that they are immune—and other files of a similar nature are likely present on more polluted systems. See the sketch after this list.)

    **With the particularly perfidious variation of a hostile maker of the original software, who abuses an Internet connection to “phone home” with the user’s private information (cf. Windows 10), or a smart-phone interface to send spam messages to all addresses in the user’s address book, or similar.
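As a concrete illustration of the “recently used documents” problem from the first footnote above, the following sketch checks and neutralizes the file in question (the exact path, and whether chattr is available and effective, depend on the distribution and file system; treat this as an illustration, not a recipe):

$ cat ~/.local/share/recently-used.xbel      # see what has already been recorded
$ : > ~/.local/share/recently-used.xbel      # empty the file
# optionally, as root and where the file system supports it, block future writes:
# chattr +i ~/.local/share/recently-used.xbel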

To this, government intervention, restrictions, espionage, and whatnot might be added, already or in the future.

The implications are chilling. Consider e.g. the “Internet of things”, “smart homes”, and similar low-benefit* and high-risk ideas: Turn your light-bulbs, refrigerators, toasters, whatnot, into AIs, connect them to the Internet, and what will happen? Well, sooner or later one or more of them will be taken over by a hostile entity, be it a hacker or the government, and good-bye privacy (and possibly e.g. money). Or consider trusting a business with a great reputation with your personal data, under the solemn promise that the data will never be abused: Well, the business might be truthful, but will it be sufficiently secure for sufficiently long? Will third parties that legitimately** share the data also be sufficiently secure? Do not bet your life on it—and if you “trust” a dozen different businesses, it is just a matter of time before at least some of the data is leaked. Those of you who follow security-related news will have noted a number of major revelations of stolen data being made public on the Internet during the last few years, including several incidents involving Yahoo and millions of Yahoo users.

*While there are specific cases where non-trivial benefits are available, they are in the minority—and even they often come with a disproportionate threat to security or privacy. For instance, to look at two commonly cited benefits from this area: Being able to turn the heating in one’s apartment up from the office shortly before leaving work, or down from a vacation resort, is a nice feature. Is it more than a nice-to-have, however? For most people, the answer is “no”. Do I actually want my refrigerator to place an order with the store for more milk when it believes that I am running out? Hell no! For one thing, I might not want more milk, e.g. being about to leave for a vacation; for another, I would like to control the circumstances sufficiently well myself, e.g. to avoid receiving one delivery of (just) milk today, another of (just) bread tomorrow, etc. For that matter, I am far from certain that I would like food deliveries to be a common occurrence in the first place (for reasons like avoiding profile building and potential additional costs).

**From an ethical point of view, it can be disputed whether this is ever the case; however, it will almost certainly happen anyway, in a manner that the business considers legitimate, the simple truth being that it is very common for large parts of operations to be handled by third parties. For example, at least in Germany, a private-practice physician will almost certainly have lab work done by an external contractor (who will end up with the name, address, and lab results of the patient) and have bills handled by a factoring company (who will end up with the name, address, and a fair bit of detail about what took place between patient and physician)—this despite such data being highly confidential. Yes, the patient can refuse the sharing of his data—but then the physician will just refuse to take him on as a patient… To boot, similar information will typically end up with the patient’s insurance company too—or it will refuse to reimburse his costs…

On paper, I might look like a hardware maker’s dream customer: in the IT business, a major nerd, living behind the keyboard, and earning well. In reality, I am more likely to be a non-customer, in large part* due to my awareness of the many security issues. For instance, my main use of my smart-phone is as an alarm clock—and I would not dream of installing the one-thousand-and-one apps that various businesses, including banks and public-transport companies, try to shove down the throats of their customers in lieu of a good web-site or reasonable customer support. Indeed, when we compare what can be done with a web-site and with a smart-phone app (in the area of customer service), the app brings precious little benefit, often even a net detriment, for the customer. The business of which he is a customer, on the other hand, has quite a lot to gain, including better possibilities to control the “user experience”, to track the user, to spy on other data present on the device, … (All to the disadvantage of the user.)

*Other parts include that much of the “innovation” put on the market is more-or-less pointless, and that what does bring value will sell for a fraction of the current price to those with the patience to wait a few years.

Sadly, even with wake-up calls like Meltdown and Spectre, things are likely to grow worse and our opportunity to duck security risks to grow smaller. Twenty years from now, it might not even be possible to buy a refrigerator without an Internet connection…

In the meantime, however, I advise:

  1. My fellow consumers to beware of the dangers and to prefer more low-tech solutions and less data sharing whenever reasonably possible.
  2. My fellow developers to understand the dangers of complexity and to try to avoid it and/or reduce its damaging effects, e.g. through preferring smaller pieces of software/interfaces/whatnot, using a higher degree of modularization, sharing less data between components, …
  3. Businesses to take security and privacy seriously and not to unnecessarily endanger the data or the systems of their customers.
  4. The governments around the world to consider regulations* and penalties to counter the current negative trends and to ensure that security breaches hurt the people who created the vulnerabilities as much as they hurt the customers—and, above all, to lay off idiocies like the Bundestrojaner!

    *I am not a friend of regulation, seeing that it usually does more harm than good. However, when the stakes are this high, and the ability or willingness to produce secure products so low, regulation is the smaller of the two evils. (With some reservations for how well or poorly thought-through the regulations are.)


Written by michaeleriksson

January 7, 2018 at 1:08 am

The declining security of Linux (and sudo considered harmful)


Naive approaches to computer security have long been a thorn in my side, starting with the long-lasting Windows assumption of a single user and user account per system. (Originally explicit, in that no second user account or user control was available; in the last ten-or-so years, in the form that the standard case is one user and one user only—who, if at all possible, should only ever work with one account.)

Unfortunately, Linux has also taken a turn for the worse over the years, often taking extremely naive approaches, prioritizing the convenience of the inexperienced user over security*, and opening holes that even a highly proficient user is often unaware of—and with more and more holes as time goes by.

*With the dual effect that those who want security have to put in a load of work (and likely still fail) and that many users are not aware of how poor their security is. Notably, the naive users might be pleased about the convenience—but they too are victims of the poor security. I would even argue that because they are naive, there is a greater obligation to protect them through implementing strong default security.

A prime example is the default file permissions (umask), which on most modern systems are set so that anyone can read the files of everyone else… This is so obviously wrong and idiotic that whoever is responsible should be taken out and shot. The obviously correct default behavior, and what matches the reasonable intent on almost all systems, is permissions where either only the owner is allowed to read a file, or only the owner and the members of the file’s “group”*. One of the first things I do with a new installation is to restrict the default file permissions to owner only—if something else is needed for a specific file, I override the default.

*The standard file permissions on Unix-like systems divide the world into the owner, the group, and everyone else. By assigning users to a group, they can be given different access to certain files than “everyone else”, without being the owner.
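To illustrate the default versus a restricted setting (a sketch; the file names are made up, the listing output is abbreviated, and the exact default umask varies between distributions):

$ umask                    # a common default
0022
$ touch example.txt
$ ls -l example.txt        # readable by group and others
-rw-r--r-- 1 owner group 0 … example.txt
$ umask 077                # restrict new files to the owner
$ touch private.txt
$ ls -l private.txt
-rw------- 1 owner group 0 … private.txt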

This misconfiguration is particularly dangerous because it is unexpected, it is often only discovered when it is (potentially) too late, and it requires an above-average amount of knowledge to correct*.

*It is not enough to simply change the default setting: Each and every file that has already been created with that setting must have its individual setting corrected.
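A hedged sketch of such a correction (the umask line belongs in whatever startup file the shell actually reads, and the recursive chmod assumes that nothing under the home directory is meant to be shared; anything that must stay shared has to be re-opened explicitly afterwards):

$ umask 077                       # future files: owner only
$ echo 'umask 077' >> ~/.profile  # make the setting survive new logins (shell dependent)
$ chmod -R go-rwx ~               # strip group/other access from existing files and directories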

Another particularly annoying and dangerous problem is demonstrated by utterly conceptually flawed tools like sudo, pkexec, and polkit: Much like the execution controls in Windows, they assume that a user has a varying amount of rights to do things depending on how he does them. (E.g. through calling a command with or without sudo, or through giving or not giving a password to polkit.) While these tools are intended to increase security, they instead open up ridiculous security holes, and increase the likelihood both of users being given rights that the admins never intended them to have and of hostiles being able to achieve “privilege escalation”*.

*Roughly, an attacker starting with a certain set of rights that do not pose a danger and tricking the system into giving him more rights until he does pose a danger. This is a central part of cracking a computer system.

Consider sudo: The intention of sudo is that when a user executes the command X as “sudo X” (instead of just “X”), it is as if root (the main admin user) executed the same command. Now, which commands a certain user is allowed to “sudo” is configurable, but this configuration can be a bitch. Take something as seemingly harmless as an editor: If the user can “sudo” the editor, he can now change system files, manipulate the password storage, read documents that should be secret, … The system is effectively an open book that a skilled cracker can exploit and infiltrate as he sees fit. OK, so we do not allow editors (and a number of more obvious things like command shells, commands to delete files, and the like). Now what about all the other applications that are not editors but still have the ability to execute editors, or even just to save a file? What about those that can execute commands (e.g. through a “shell escape”—a very common mechanism on Unix-like systems)? They too must be ruled out. Etc. But here is the real devilry: How do we find out which commands have which abilities? This is a virtually impossible task, with many nasty surprises—e.g. that the standard pager (“less”; seemingly only intended to view files) has the ability to launch an editor… The only chance is to reduce the “sudoable” commands to an absolute minimum, carefully verify that minimum, and (more likely than not) conclude that the users now do not receive the convenience that sudo was intended to give them.
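To make the “nasty surprises” concrete: on common Linux systems, both the standard pager and the standard editor offer escape routes of exactly this kind (the details can vary with version and configuration):

Inside less:
  !id                  # runs an arbitrary shell command
  v                    # launches the editor named by $VISUAL or $EDITOR

Inside vim (when not restricted by e.g. NOEXEC):
  :!sh                 # drops to a shell
  :w /some/other/file  # writes the buffer to an arbitrary (writable) path

Run via sudo, each of these happens with root rights.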

The task of configuring sudo is made harder still because most Linux distributions appear to work on the assumption that any system is a single-user system (as with Windows above)—and cram whatever gives the user convenience into the corresponding configuration. Looking at the configuration file /etc/sudoers on my current system*, I find e.g.

*No worries: While the configuration file is still there, the actual sudo program has been removed.

# Allow members of group sudo to execute any command

%sudo ALL=(ALL:ALL) ALL

The comment line says it all.

Now, a good admin would not assign the group “sudo” to just anyone and would use far more granular settings to give individual users what they need. However, not all admins* are good, and this approach practically invites the admin to be lazy and assign rights carelessly. To boot, it makes it easy for the Linux distribution to screw up, because the consequences of a change become hard to predict, e.g. when default group assignments or default configuration entries are altered. In one horrendous case I heard of some months ago, the default configuration actually gave everyone, irrespective of group, the right to “sudo” anything, resulting in a system with no actual security anymore…

*Note that the admin is often quite, quite poor as an admin: Admins are not just found in big enterprises—the family member who takes care of the family’s computers is also an admin.
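For comparison, a more granular entry might look roughly like the following (user name and command are hypothetical; even such a narrow rule must be checked for escape routes, e.g. commands that page their output or read their own command files, before being trusted):

# Allow one specific user to run one specific, carefully chosen command as root
alice ALL = (root) /usr/bin/systemctl restart nginx.service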

Other sources do truly stupid things, like https://help.ubuntu.com/community/Sudoers, which gives an example of how to add an editor (!) to the configuration—and this in a section titled “Common Tasks”…

myuser ALL = (root) NOPASSWD:NOEXEC: /usr/bin/vim

This example lets the user “myuser” run as root the “vim” binary without a password, and without letting vim shell out (the :shell command).

Well, preventing “shell out” (more properly “shell escape”, one of the issues I mention above) is good, but obviously the idiot who wrote this has failed to understand that an editor is lethally dangerous too (cf. above). For instance, “sudo vim /etc/shadow” gives a malicious user the possibility to change the root password, after which he can trivially gain a root shell—without needing a “shell out”.
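Spelled out, the attack could look like this (a sketch; the hashing command and file layout are the usual ones on current Linux systems, but details differ between distributions):

$ openssl passwd -6 'attacker-chosen-password'   # generate a password hash
$ sudo vim /etc/shadow                           # paste the hash into root’s entry
$ su -                                           # log in as root with the chosen password

Note that NOEXEC hinders none of this: no program is launched from within vim; the editor merely writes a file.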

In contrast, the earlier approach was very sound: Either a user account had the right to do something or it did not—end of story. Usually, “did not” applied when not dealing with the user’s own files. When more rights were needed for a task, the physical user had to log in with another user account with more rights in the relevant area (and typically fewer in other areas!)—if he was trusted with such an account*. Yes, sudo can be more convenient, but that convenience is bought with a horrendous drop in security.

*If he was not trusted, then he correctly had no opportunity to do whatever he wanted to do.

The one saving grace of sudo is that it makes life a little safer for those who would otherwise take even greater risks in the name of convenience, by giving themselves dangerous rights all the time. This, however, is not a valid reason to make life that much less secure for the users who actually try to be secure and know how to handle themselves. This is like noting that condoms reduce pleasure and replacing them with some other mechanism that gives more pleasure—but does so at the price of not actually preventing pregnancy and disease transmission…

As a rule of thumb: If someone recommends that you use sudo, discount anything he says on security issues. This tool is simply one of the worst security ideas in the history of Linux.

I have seen some truly absurd cases, e.g. one nitwit who adamantly insisted that logging in as root on a terminal was very dangerous, but still threw sudos around willy-nilly. (While logging in as root is never entirely without danger, a terminal is the least dangerous place to do so, seeing that this reduces the risk of a snooper catching the password, removes the temptation of starting various GUI programs, and drastically reduces the risk of forgetting that one is using the root account and mistakenly doing something stupid.)

Excursion for the pros:

Those who know a little more about Unix security might see a major advantage of sudo in the reduced need for suid-ing programs. This might or might not have been an advantage at some point in time, but I have worked for years without using sudo and I have never needed to change anything in this regard. I conclude that what should work works, be it through appropriate group settings, daemons, or suid programs that are there irrespective of the presence of sudo. In addition, I am not convinced that suid programs, the potential dangers notwithstanding, are a greater evil than sudo, at least not after considering the relative likelihood of an admin doing something stupid—it is not just a question of which approach is technically safer, but also of which approach gives us the better protection against human error.
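For those who want to check their own systems, the suid programs already present can be listed with something like the following (a sketch; expect a handful of entries, e.g. passwd and mount, on a typical installation):

$ find / -perm -4000 -type f 2>/dev/null   # list suid binaries; errors from unreadable paths are discarded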

Written by michaeleriksson

December 6, 2016 at 11:07 pm