High-Performance InnoDB MySQL on Multi-CPU Servers
I learned something amazing about MySQL: if you allocate a single monolithic InnoDB Buffer Pool larger than the total installed RAM divided by the number of physical CPUs, you will incite the OS into memory swapping at regular intervals due to a full InnoDB Buffer Pool. MySQL 5.5's innodb_buffer_pool_instances option can be used to split up the Buffer Pool. Yesterday, I properly implemented this for the client I mentioned in my answer last year. I still have 162GB for the client's Buffer Pool. I set the server's innodb_buffer_pool_instances option to 2 because each DB server is dual hexacore. I was thinking of setting it to 12, but then a colleague showed me a blog post from Jeremy Cole on MySQL and swappiness. After reading it, I put it into practice immediately for my client. I ran this command:
numactl --hardware
I saw the 192GB of server RAM mapped as 96GB to each physical CPU (NUMA node). Therefore, I set innodb_buffer_pool_instances to 2. Things are looking good right now. I will update my answer over the next 2 months to report how this affects memory swapping.
In my.cnf:

innodb_buffer_pool_size = ???M
innodb_buffer_pool_instances = ?? (default: 1; typically 2, 4, or 8)
Memory allocation: http://mysql.rjweb.org/doc.php/memory
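As a rough sketch of how the 60~80%-of-RAM guideline works out in practice, here is the arithmetic for the 192GB server above with a hypothetical 70% target (the numbers are illustrative, not a recommendation):

```shell
#!/bin/sh
# Hypothetical sizing: 192GB of RAM, targeting ~70% for the InnoDB Buffer Pool
RAM_MB=$((192 * 1024))          # total RAM in MB
BP_MB=$((RAM_MB * 70 / 100))    # 70% of RAM
echo "innodb_buffer_pool_size = ${BP_MB}M"
# prints: innodb_buffer_pool_size = 137625M
```

On a dedicated DB server you can lean toward the high end of the range; on a shared box, leave more headroom for the OS and other buffers.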
| Expression | Meaning | Recommended range | What to do if out of range |
|---|---|---|---|
| innodb_buffer_pool_size / _ram | % of RAM used for InnoDB buffer_pool | 60~80% | |
| Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests | Read requests that had to hit disk | 0-2% | Increase innodb_buffer_pool_size if you have enough RAM. |
| Innodb_pages_read / Innodb_buffer_pool_read_requests | Read requests that had to hit disk | 0-2% | Increase innodb_buffer_pool_size if you have enough RAM. |
| Innodb_pages_written / Innodb_buffer_pool_write_requests | Write requests that had to hit disk | 0-15% | Check innodb_buffer_pool_size |
| Innodb_buffer_pool_reads / Uptime | Reads | 0-100/sec. | Increase innodb_buffer_pool_size? |
| (Innodb_buffer_pool_reads + Innodb_buffer_pool_pages_flushed) / Uptime | InnoDB I/O | 0-100/sec. | Increase innodb_buffer_pool_size? |
| Innodb_buffer_pool_pages_flushed / Uptime | Writes (flushes) | 0-100/sec. | Increase innodb_buffer_pool_size? |
| Innodb_buffer_pool_wait_free / Uptime | Counter for when there are no free pages in the buffer_pool; that is, all pages are dirty | 0-1/sec. | First be sure innodb_buffer_pool_size is set reasonably; if still trouble, decrease innodb_max_dirty_pages_pct |
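To make the first disk-hit ratio concrete, here is a small sketch that computes Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests; the counter values are made up, and in practice you would pull them from SHOW GLOBAL STATUS:

```shell
#!/bin/sh
# Hypothetical counters (in practice: SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%')
READS=12000          # Innodb_buffer_pool_reads: logical reads that had to hit disk
REQUESTS=5000000     # Innodb_buffer_pool_read_requests: all logical read requests
PCT=$(awk -v r="$READS" -v q="$REQUESTS" 'BEGIN { printf "%.2f", 100 * r / q }')
echo "disk-read ratio: ${PCT}%"
# prints: disk-read ratio: 0.24%
```

A value under 2% means the Buffer Pool is absorbing nearly all reads; a higher value suggests the working set does not fit and the Buffer Pool may need to grow.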
OBSERVATION #1
When committing to use InnoDB, you also need to commit to tuning for multiple cores.
- innodb_read_io_threads : The default is 4. I would raise it to 16.
- innodb_write_io_threads : The default is 4. I would raise it to 16.
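In my.cnf, that comes out to something like this (a sketch for a multi-core box; 16 is the suggestion above, not a universal value, and both settings require a restart):

```ini
[mysqld]
innodb_read_io_threads  = 16
innodb_write_io_threads = 16
```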
OBSERVATION #2
I see you have innodb_thread_concurrency = 8. If you set innodb_thread_concurrency to 0 (which is the default), you get unlimited concurrency. That lets the InnoDB storage engine decide how many threads it feels it needs and can handle.
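If you take that advice, the my.cnf change is simply (a sketch):

```ini
[mysqld]
# 0 (the default) lets InnoDB decide how many concurrent threads to admit
innodb_thread_concurrency = 0
```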
OBSERVATION #3
Your DB server has 12GB of RAM, and your InnoDB Buffer Pool is bigger than half the RAM. You need to partition the Buffer Pool by setting innodb_buffer_pool_instances to 2. In conjunction with this, you need to run mysqld under numactl --interleave=all (not applicable to VMs).
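On the shell, that looks roughly like the following (the mysqld_safe invocation is illustrative; in production you would normally patch the init script or service wrapper instead):

```sh
# Inspect the NUMA layout first
numactl --hardware

# Then start mysqld with its memory interleaved across all NUMA nodes
numactl --interleave=all mysqld_safe &
```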
OBSERVATION #4
I see you have innodb_file_format = Barracuda. I wish you would go back to innodb_file_format = Antelope.
Why go back to uncompressed? Compression tends to bloat the InnoDB Buffer Pool because compressed and uncompressed data and index pages coexist in the Buffer Pool. I just wrote about this: see my post innodb_file_format Barracuda.
EPILOGUE
Here are some of my past posts on tuning InnoDB:
- Oct 22, 2012 : How large should be mysql innodb_buffer_pool_size?
- Jul 23, 2012 : How to get the most out of MySQL on a QuadCore machine with 16 GB of RAM?
- Jul 21, 2012 : InnoDB - High disk write I/O on ibdata1 file and ib_logfile0
- Sep 20, 2011 : Multi cores and MySQL Performance
- Sep 12, 2011 : Possible to make MySQL use more than one core?
- Feb 12, 2011 : How do you tune MySQL for a heavy InnoDB workload?
Looking again at your my.ini, I have some more recommendations in terms of memory usage:
- Set query_cache_size to 0. Yes, I am saying disable the query cache, because it battles head-to-head with InnoDB all the time.
- Lower your innodb_log_buffer_size to 32M. Granted, a large InnoDB Log Buffer can save on disk I/O, but one that is too large can make commits and checkpoints take longer. This could manifest itself as a bottleneck for each thread, possibly underutilizing cores.
- You did not set innodb_log_file_size. The default is 5M, which might be too small. With such a large Buffer Pool, you may want to increase it to 1G. See my post How to safely change MySQL innodb variable 'innodb_log_file_size'? on how to enlarge ib_logfile0 and ib_logfile1.
- SUPPLEMENTAL INFO : You did not set innodb_fast_shutdown (the default is 1). You should add innodb_fast_shutdown=0 to my.ini. That will cause all uncommitted transactions to be fully purged from ibdata1, ib_logfile0, and ib_logfile1 whenever you shut down mysqld. This makes for a shorter InnoDB Crash Recovery cycle on startup.
- SUPPLEMENTAL INFO : You did not set innodb_file_per_table. That means everything InnoDB and its grandmother is sitting in ibdata1. Besides data and index pages, there are other moving parts to ibdata1. You can spread out disk I/O for InnoDB by separating InnoDB tables from ibdata1. See my StackOverflow post Howto: Clean a mysql InnoDB storage engine? on how to do this and fully defragment ibdata1.
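Pulling the supplemental settings together, the my.ini additions would look roughly like this (a sketch using the values suggested above; resize the log files only via the safe procedure mentioned earlier, since mysqld will not start if the existing ib_logfile* sizes disagree with the setting):

```ini
[mysqld]
innodb_log_buffer_size = 32M
innodb_log_file_size   = 1G
innodb_fast_shutdown   = 0
innodb_file_per_table  = 1
```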