Recently I encountered a situation in which a customer was forced to stay with the MyISAM engine because a legacy application used tables with over 1000 columns. Unfortunately, InnoDB has a limit here. I did not expect to hear this argument for MyISAM; it is usually about full-text search or spatial index functionality, which was missing in InnoDB and introduced in MySQL 5.6 and 5.7, respectively, to let people forget about MyISAM. In this case, though, InnoDB still could not be used, so I gave TokuDB a try.

I created a simple bash script to generate a SQL file with a CREATE TABLE statement containing the number of columns I desired, and then tried to load it using different storage engines. A bit surprisingly, InnoDB failed with a column count above 1017, a little more than the documented maximum of 1000.
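The original script is not shown here, but a minimal sketch looks like the following. The function name `gen_table` and the table name `test_many_cols` are illustrative, not taken from the original:

```shell
#!/bin/bash
# gen_table: print a CREATE TABLE statement with the requested number of
# columns for the given storage engine (hypothetical helper).
gen_table() {
  local num_cols=$1 engine=$2
  echo "CREATE TABLE test_many_cols ("
  local i sep
  for ((i = 1; i <= num_cols; i++)); do
    sep=","
    [ "$i" -eq "$num_cols" ] && sep=""
    echo "  col_${i} TINYINT${sep}"
  done
  echo ") ENGINE=${engine};"
}

# Example: generate a statement just above InnoDB's observed limit,
# then feed it to the server, e.g.:
#   gen_table 1018 InnoDB | mysql -u root test
gen_table 1018 InnoDB > /dev/null
```

Varying the first argument and the engine name lets you bisect each engine's actual column-count ceiling.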

MyISAM let me create a maximum of 2410 columns, and I could achieve the same result with a TokuDB table. I tried both the TINYINT and CHAR(10) data types and the same cap applied, though I am not quite sure why it is exactly 2410.

So if you have that rare kind of table schema with that many columns and you wish to use a transactional storage engine, you may go with TokuDB, which is also available in recent Percona Server 5.6 releases.

You can find more details about column number limits in MySQL in this post, “Understanding the maximum number of columns in a MySQL table.”

3 Comments
Reiner Rusch

I would say the problem is more historical: the schema was designed wrong from the beginning.
What real requirement forces the use of a 1000-column table?

Reiner Rusch

Yes, this problem is very common.
The point is to learn from it and not to teach others to make the same mistakes again and again.
As a DBA I see such things directly, and it's amazing what you can get out of an old machine by optimizing, instead of just buying a newer one and hoping that would solve the problem (in the long term…)! 😉

And yes, there's a gap between what you are told in school about normalization and what your application actually requires in practice.
This problem grows as more and more applications use the table in different ways.
My solution is to measure and profile, again and again!