Mysql notes

From Federal Burro of Information
Revision as of 23:44, 15 May 2019 by David (talk | contribs)

Advice

Sean Hull with some really good MySQL advice:

  1. Tune those queries. Biggest bang for your buck. Enable the slow query log and watch it. Once you've found a heavy, resource-intensive query, optimize it! Tune what receives real-world traffic.
  2. Employ master-master replication. You immediately have a read-only slave for your application to hit as well.
  3. Use your memory. Set innodb_buffer_pool_size, key_buffer_size and other key options.
  4. RAID your disk I/O. Use RAID 10 (mirroring and striping). On EC2, stripe across a number of EBS volumes using Linux md software RAID.
  5. Tune key parameters. Speed up inserts and updates with innodb_flush_log_at_trx_commit=2; create a tablespace and underlying datafile for each table with innodb_file_per_table.
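A starting point for the settings mentioned above might look like the fragment below. The values are illustrative assumptions, not recommendations: size the buffer pool to your machine (often 50-70% of RAM on a dedicated InnoDB server), and note that innodb_flush_log_at_trx_commit=2 can lose up to about a second of transactions on a crash.

```ini
[mysqld]
# InnoDB cache for data and indexes (illustrative value, size to your RAM)
innodb_buffer_pool_size = 4G
# MyISAM key cache (only matters if you still have MyISAM tables)
key_buffer_size = 256M
# 2 = write log at commit, fsync ~once per second: faster, small crash-loss window
innodb_flush_log_at_trx_commit = 2
# one tablespace file per table
innodb_file_per_table = 1
# find the queries worth tuning (threshold is an illustrative choice)
slow_query_log = 1
long_query_time = 2
```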

More mysql advice: http://mysql.rjweb.org/doc.php/memory

Disk usage / Indexes v total

SELECT
CONCAT(ROUND(SUM(data_length) / (1024 * 1024 * 1024), 2), 'G') DATA,
CONCAT(ROUND(SUM(index_length) / (1024 * 1024 * 1024), 2), 'G') INDEXES,
CONCAT(ROUND(SUM(data_length + index_length) / (1024 * 1024 * 1024), 2), 'G')
'TOTAL SIZE'
FROM information_schema.TABLES;

+-------+---------+------------+
| DATA  | INDEXES | TOTAL SIZE |
+-------+---------+------------+
| 1.22G | 0.66G   | 1.87G      |
+-------+---------+------------+
1 row in set (14.14 sec)


---

From where, I don't know. Did I write this?

I had multiple clients at a previous employer that needed help managing change on their databases. The tough answer is that without the time and resources to do backups and restores, you must trust that the change is good, which is to say you must TEST to be sure it's good. And that's tough.

In those situations where the change was reasonably well contained, i.e. simple, we would work with the client to write an alter script that rolled out the change (plus any data transforms that needed to happen) and an alter script to roll back the change (and the data transforms, backwards). Then we could compare and contrast dumps to be sure that what we asked for was what we got, and that the app worked as it needed to before and after. (Would the app still work if we rolled back?)
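As a sketch of that forward/rollback pattern (the table, column, and transform here are made up for illustration), the pair of scripts might look like:

```sql
-- forward.sql: roll out the change plus any data transform
ALTER TABLE customer ADD COLUMN loyalty_tier TINYINT NOT NULL DEFAULT 0;
UPDATE customer SET loyalty_tier = 1 WHERE lifetime_spend > 1000;

-- rollback.sql: undo the change (and reverse the transform, where possible)
ALTER TABLE customer DROP COLUMN loyalty_tier;
```

Dumping the schema before forward.sql, after forward.sql, and after rollback.sql, then diffing the dumps, confirms both that you got exactly the change you asked for and that rollback really returns you to the starting state.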

It's a lot of time and energy, but as it turns out, in many cases the crown jewels are in the database, so it matters enough to spend the time and energy.

Another important aspect of managing change in a reasonably complex application is partitioning. Put your less important data in separate, reasonably decoupled databases. What you will find is that the crown jewels are a lot smaller, and it will be easier to manage change on that data.

Also, make sure that your DBAs and devs understand the architectural layout of the app in production. I once saw a situation where the dev team explained an upcoming change indicating that step one was to bring down all the database servers so the code could be upgraded everywhere at the same time... Someone neglected to mention that, as a global service provider, someone somewhere was awake and using the app 24/7, so you could never take down ALL the database servers at the same time.

Disk usage by table

SELECT TABLE_SCHEMA, TABLE_NAME,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) AS "Approximate size (MB)"
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')
ORDER BY (DATA_LENGTH + INDEX_LENGTH) DESC;
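A related sketch that rolls the same numbers up per schema, largest first, which is handy for spotting which database holds the bulk of the data:

```sql
SELECT TABLE_SCHEMA,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 2) AS "Approximate size (MB)"
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')
GROUP BY TABLE_SCHEMA
ORDER BY SUM(DATA_LENGTH + INDEX_LENGTH) DESC;
```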

Diffable dumps

mysqldump --extended-insert=FALSE --skip-dump-date dbname > dump.sql

One INSERT per row makes dumps line-diffable; --skip-dump-date (added here) keeps the trailing timestamp comment from showing up in every diff.