With my suggestion, others started to give their feedback. A few of the responses were interesting and informative, as listed below.
Sanjeewa Malalgoda: I was looking into this subject some time back and found some points. AFAIU the number of transactions alone is not enough. I found some interesting tools like dbtuna [1] and Jet Profiler [2]; I hope we can have a look at them and get some ideas. I have tested Jet Profiler and it gives a lot of information related to DB usage.
I went through them and came to the conclusion below, which I sent as my reply.
“It is always good to know a person who has worked in the same area. I went through those two tools, and they mainly target the management and administrative aspects of the DB server. They give us nice graphical representations of existing data, which can be very useful for understanding the usage patterns of users. But they do not give us any new information; they only present the data found in the information_schema and the logs. As you have used them, you might know more about them, so correct me if I am wrong.”
Jet Profiler
- Install Java 1.6 separately
- Unzip jetprofiler_v2.0.5.zip to the desired folder (e.g. /usr/local/bin/jetprofiler or /home/USER/bin/jetprofiler).
- Run ./jetprofiler
MySQL logs
sudo tail -f
/var/lib/mysql/malinga-Dell-System-Vostro-3450.log
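The general query log being tailed above has to be switched on first; a minimal my.cnf sketch (the file path is the one from the machine above, and note this log records every statement, so it is heavy in production):

```ini
[mysqld]
# Log every client connection and statement to the file below
general_log      = 1
general_log_file = /var/lib/mysql/malinga-Dell-System-Vostro-3450.log
```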
This proved useful in calculating the CPU time taken by each user.
SLOW_QUERY_LOG: this can be used to log slow queries (those taking a lot of CPU time) to a file or a table. It provides the following details about each slow query:
The query
Time
User + Host
Query time
Lock time
Rows sent
Rows examined
Used database
We can define what counts as a slow query by setting LOG_SLOW_TIME (e.g. if I set this to 1, all queries that take more than 1 second will be treated as slow queries).
We can use this to measure CPU time. It ignores small queries, but those should not make much difference anyway, and it is possible to set LOG_SLOW_TIME to a smaller value (with the trade-off that more CPU time will be spent on logging).
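For reference, stock MySQL exposes these knobs as the slow_query_log and long_query_time system variables; a minimal my.cnf sketch (the log file path and the 1-second threshold are just illustrations):

```ini
[mysqld]
# Enable the slow query log and choose where entries go
slow_query_log      = 1
slow_query_log_file = /var/lib/mysql/slow.log
# Queries running longer than this many seconds are logged as slow
long_query_time     = 1
# FILE writes to the file above; TABLE writes to mysql.slow_log instead
log_output          = FILE
```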
Negatives:
We don't account for small queries.
This depends on the time taken to execute, which can differ from the real CPU time in congested situations.
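The fields listed above appear in each slow-log entry, so totalling query time per user is a small parsing job. A minimal Python sketch that pulls the statistics out of one entry (the log text here is a made-up example in the standard slow-log layout, not taken from a real server):

```python
import re

# A made-up slow-query-log entry in the usual MySQL text format.
ENTRY = """# Time: 110612 10:04:11
# User@Host: root[root] @ localhost []
# Query_time: 2.312  Lock_time: 0.005 Rows_sent: 10  Rows_examined: 40000
use testdb;
SELECT * FROM orders WHERE total > 100;
"""

def parse_slow_entry(entry):
    """Extract the per-query statistics from one slow-log entry."""
    stats = {}
    m = re.search(r"# User@Host: (\S+) @ (\S+)", entry)
    if m:
        stats["user"], stats["host"] = m.group(1), m.group(2)
    m = re.search(
        r"# Query_time: ([\d.]+)\s+Lock_time: ([\d.]+)\s*"
        r"Rows_sent: (\d+)\s+Rows_examined: (\d+)", entry)
    if m:
        stats["query_time"] = float(m.group(1))
        stats["lock_time"] = float(m.group(2))
        stats["rows_sent"] = int(m.group(3))
        stats["rows_examined"] = int(m.group(4))
    # Everything after the '#' header lines is the query text itself.
    stats["query"] = "\n".join(
        line for line in entry.splitlines()
        if not line.startswith("#")).strip()
    return stats

print(parse_slow_entry(ENTRY))
```

Summing `query_time` per `user` over many such entries gives the per-user time figure discussed above, subject to the same caveat that wall-clock query time is not real CPU time under congestion.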