Dropping database objects in a safe way

A DROP of rain on a TABLE

When I worked as a DBA, I was reasonably scared of 💧DROP commands. Don’t get me wrong: despite this, I badly wanted to drop obsolete tables. After that, I could more easily ask developers to fix bad practices, like tables without primary keys or without indexes at all. Similarly, I wanted to drop obsolete columns, especially from very big tables, and to normalise big sparse matrices (which implies adding a new table and then dropping columns). And I wanted to drop duplicate or unused indexes, as they add useless overhead to writes and to the optimiser.

But every time a developer said something like “it’s OK, this table is not used anymore”, I knew I could not trust that statement. And indeed, from time to time, it proved wrong. Not because I worked with stupid developers! I am sure that they always ran at least a git grep. But maybe the table was used in some obscure job or script, and they missed it.

The same happened with columns. And I already wrote how dropping an index (even an unused one) can break an application.

But don’t let this stop you. Here I will show the process I used to safely drop stuff. Of course not all companies are the same, so the process will probably vary in your case.

XKCD strip about SQL injections
Technically, this woman could be acting in good faith

Check if objects are used

If you use MariaDB or Percona Server, you should definitely enable the user statistics plugin. It can be useful in many ways. PMM uses it to draw graphs on table usage, and you can probably do the same with other technologies, like Zabbix templates. If a table is used, you will see it in a graph.
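You can also query the statistics directly. A minimal sketch, assuming the plugin is enabled via the userstat variable and a hypothetical table called orders_legacy:

```sql
-- Enable statistics collection (MariaDB / Percona Server).
SET GLOBAL userstat = 1;

-- How much has the table been touched since the counters were last reset?
-- The table name (orders_legacy) is just an example.
SELECT TABLE_SCHEMA, TABLE_NAME, ROWS_READ, ROWS_CHANGED
    FROM information_schema.TABLE_STATISTICS
    WHERE TABLE_NAME = 'orders_legacy';
```

Keep in mind that these counters are reset by a server restart (or by FLUSH TABLE_STATISTICS), so make sure they have been accumulating for a long enough window before drawing conclusions.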

Check if the table or column is used in any stored routines, triggers or events. A query with LIKE will help you, unless you have many procedures and you are looking for a common table name like “order”.
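Such a check can be run against information_schema. A sketch, with my_table as a placeholder for the object you plan to drop (beware of false negatives with dynamic SQL, and false positives with common names):

```sql
-- Stored procedures and functions mentioning the object.
SELECT ROUTINE_SCHEMA, ROUTINE_NAME
    FROM information_schema.ROUTINES
    WHERE ROUTINE_DEFINITION LIKE '%my_table%';

-- Triggers mentioning the object.
SELECT TRIGGER_SCHEMA, TRIGGER_NAME
    FROM information_schema.TRIGGERS
    WHERE ACTION_STATEMENT LIKE '%my_table%';

-- Events mentioning the object.
SELECT EVENT_SCHEMA, EVENT_NAME
    FROM information_schema.EVENTS
    WHERE EVENT_DEFINITION LIKE '%my_table%';
```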

Of course, also check your scripts, including scripts for tasks like backups or anonymisation.

The rest is up to your developers. If they say that a table is not used, you will pretend you believe them. But be sure to check with all relevant teams. This typically includes analysts and data scientists. Don’t use private messages for this type of communication. The best way is to add tasks in their bug tracker (JIRA or whatever) asking them to check if a table or column is used. This will give more people the opportunity to report that an object shouldn’t be dropped, and will show that you did all the checks you should do in these cases.

Don’t forget that the same checks must be done for indexes, as they could be mentioned in optimiser hints.
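If user statistics are enabled, a candidate list of unused indexes can be built by comparing the indexes that exist with the indexes that were actually read. A sketch (MariaDB/Percona Server only; remember that the counters reset on restart, and that this says nothing about optimiser hints in the code – those still need a search):

```sql
-- Indexes defined in STATISTICS that never appear in INDEX_STATISTICS
-- have not been read since the counters were last reset.
SELECT s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME
    FROM information_schema.STATISTICS s
    LEFT JOIN information_schema.INDEX_STATISTICS i
        ON  i.TABLE_SCHEMA = s.TABLE_SCHEMA
        AND i.TABLE_NAME   = s.TABLE_NAME
        AND i.INDEX_NAME   = s.INDEX_NAME
    WHERE i.INDEX_NAME IS NULL
        AND s.INDEX_NAME <> 'PRIMARY'
    GROUP BY s.TABLE_SCHEMA, s.TABLE_NAME, s.INDEX_NAME;
```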

Dropping objects

Whatever you are dropping – a table, a column or an index – the process is the same.

The first step must be dropping the table/column in staging. You already know this, and I won’t stress this obvious concept. Instead, I want to stress that in the real world this is not enough. I saw several cases where a table was never used in staging, but it was used in production, maybe by some job.

You should rename it instead. Add a prefix, for example __old_. When I can establish a naming convention (a good topic for another time), I disallow table names starting with a _ character. This makes __old_ and other special prefixes safer, so that you can reliably use them in your scripts (for example, excluding __old_ tables from dumps).
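The rename itself is a one-liner (table name hypothetical):

```sql
-- Rename instead of dropping; this only touches metadata.
RENAME TABLE orders_legacy TO __old_orders_legacy;
```

And with the naming convention in place, excluding such tables from a dump is just a matter of passing the matching --ignore-table options to mysqldump.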

The next step is simply waiting. You will have a standard time period to wait until the object is considered safe to drop, for example 2 days. What a reasonable period is depends on factors that are specific to your company, for example the frequency of job runs.

Application errors are hopefully monitored. If something goes wrong in that period, you can very quickly restore the old situation by renaming back the table, column or index. Since this operation only affects metadata, it is very fast (unless it’s blocked by a long-running query/transaction).
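The rollback statements, sketched with hypothetical names (the column and index syntax below requires a reasonably recent version; older servers need ALTER TABLE … CHANGE with the full column definition):

```sql
-- Table:
RENAME TABLE __old_orders_legacy TO orders_legacy;

-- Column (MySQL 8.0+ / MariaDB 10.5+):
ALTER TABLE orders RENAME COLUMN __old_legacy_note TO legacy_note;

-- Index (MySQL 5.7+ / MariaDB 10.5+):
ALTER TABLE orders RENAME INDEX __old_idx_note TO idx_note;
```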

If nothing goes wrong, you can drop the table. But first, you should probably take a backup. Ideally, keep it forever – you never know. In the case of a column, a backup could be a dump of the primary key plus the column to delete.
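For a column, that backup can be as simple as materialising the key and the doomed column before the DROP. A sketch, with example names:

```sql
-- Keep a copy of the primary key plus the column to delete.
CREATE TABLE __backup_orders_legacy_note AS
    SELECT id, legacy_note FROM orders;

-- Only then drop the column.
ALTER TABLE orders DROP COLUMN legacy_note;
```

The __backup_ table can then be dumped with mysqldump and archived somewhere cheap.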

A small overhead

Note that these extra steps don’t add a big overhead. Once a table is renamed, you can go back to your everyday work. And maybe once a week you can backup and drop renamed tables. This step can even be automated, and quite easily.
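A simple way to semi-automate the final cleanup is to generate the DROP statements from information_schema and review them before running anything. A sketch, relying on the __old_ convention described above:

```sql
-- List the DROP statements for all renamed tables; review the output
-- before executing it. Backslashes escape the literal underscores.
SELECT CONCAT('DROP TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '`;')
    FROM information_schema.TABLES
    WHERE TABLE_NAME LIKE '\_\_old\_%';
```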

A personal note

Do you find this type of advice useful? I can help your team establish good practices that reduce risks without slowing down development and operations. Contact me to discover what I can do for you.

Toodle pip,
Federico
