I'm sure I'm late to the party, but I recently started delving into MySQL variables. It may or may not help with the problem I'm currently dealing with: excessively lengthy queries.
As far as I can tell, MySQL by default doesn't log anything, which makes sense if you consider a production web site where your database queries need to be lightning fast. So that's fine, but on a web development machine, speed isn't the principal concern. Since this isn't my full-time gig anymore, I'm not always concerned about efficiency. I have the luxury of just putting down my laptop and walking away to do something else for a while. However, I do enjoy learning new things, so now I'm thinking that if I can enable some MySQL logging, I might not have to write as much code to do the same thing. In any case, here are some of the things I'll forget and have to look up at some point, likely next summer. From the MySQL command prompt:
> SHOW variables where Variable_name = 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | OFF   |
+---------------+-------+
1 row in set (0.00 sec)
> SET GLOBAL general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)
> SHOW variables where Variable_name = 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | ON    |
+---------------+-------+
1 row in set (0.00 sec)
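One thing worth remembering: SET GLOBAL takes effect immediately but doesn't survive a server restart. To make the setting permanent, it has to go in the MySQL configuration file. A minimal sketch of the relevant section — the file location and log path vary by installation, so treat both as placeholders:

```ini
[mysqld]
general_log      = 1
general_log_file = /usr/local/mysql/data/mysqlHost.log
```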
OK, but where is the log file?
> SHOW variables where Variable_name = 'general_log_file';
Et voilà. The location of my log file was /usr/local/mysql/data/.
There are a ton of other log variables, which you can view with the '%' wildcard just as with any query. The general_log file may be useful, but what I really thought might help is the slow_query_log. Enabling this variable and running the query that was causing me problems yielded the desired result.
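For reference, a sketch of the commands involved, run from the same MySQL prompt. The long_query_time threshold shown is my own choice for illustration, not a value from the original session:

```sql
-- List every variable with 'log' in its name
SHOW VARIABLES LIKE '%log%';

-- Turn on the slow query log
SET GLOBAL slow_query_log = 'ON';

-- Log any query taking longer than 1 second (default is 10)
SET GLOBAL long_query_time = 1;
```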
$ tail /usr/local/mysql/data/mysqlHost.log
# Query_time: 15.507301 Lock_time: 0.000184 Rows_sent: 5263 Rows_examined: 38237
# Query_time: 15.522489 Lock_time: 0.000176 Rows_sent: 5263 Rows_examined: 27711
# Query_time: 15.325020 Lock_time: 0.000193 Rows_sent: 5263 Rows_examined: 38237
As you can see from the log, the database isn't that big: maybe a few thousand rows in the primary table and a couple hundred thousand in the join table, with a WHERE clause to narrow down the results. That shouldn't take 15 seconds!
This StackOverflow page helped me solve the problem. I followed the recommended solution and prefaced my slow query with the EXPLAIN keyword, resulting in the output shown below and on Stack Overflow.
Using join buffer (block nested loop)
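The nice thing about EXPLAIN is that it reports the join strategy without actually running the query. A minimal sketch of the idea — the table and column names here are hypothetical stand-ins, since my actual schema isn't shown:

```sql
-- Prefix any SELECT with EXPLAIN to see how MySQL plans to execute it.
-- If the Extra column says "Using join buffer (block nested loop)",
-- MySQL is scanning the joined table repeatedly for lack of an index.
EXPLAIN SELECT p.name, j.value
FROM primary_table p
JOIN join_table j ON j.primary_id = p.id
WHERE j.value > 100;
```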
It turns out that most of the tables I'm using (data provided by an external source) have neither a PRIMARY KEY nor an INDEX. Adding an INDEX to the table on the column used in the JOIN sped up the query by over two orders of magnitude (from >15 seconds to under 100 milliseconds). Now that's fast! It just shows how little I understand about this stuff, and I hope it helps someone else. Now to deploy this fix to my production server...
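For completeness, the shape of the fix, again with hypothetical table and column names matching the EXPLAIN sketch rather than my real schema:

```sql
-- Index the column referenced in the JOIN's ON clause
ALTER TABLE join_table ADD INDEX idx_primary_id (primary_id);

-- Equivalent alternative syntax:
-- CREATE INDEX idx_primary_id ON join_table (primary_id);
```

After adding the index, re-running EXPLAIN should show the join using the new index instead of the block nested loop buffer.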