Since the drastic move to re-architect StocksCafe and add support for two new markets at the same time in April, the StocksCafe website has become slower. I am aware of this and have been spending a significant amount of time trying to improve it.
To be very honest, I am not 100% sure what the true cause is, although I am inclined to think it is a combination of factors such as poor database configuration and increased data volume, rather than a single cause. Speaking of which, if anyone among us is an expert in MySQL or MongoDB, please reach out 🙂
Anyway, over the weekend, in an attempt to speed things up, I installed Memcached on the StocksCafe backend. Memcached is a caching system that keeps frequently used data in memory so that repeated requests do not have to hit the database every time. As of this writing, it has reduced StocksCafe response time by about 25% (based on P99 latency).
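For those curious how this works under the hood, below is a minimal cache-aside sketch. It assumes a Python backend using the pymemcache client; the key format, the five-minute expiry, and load_portfolio_from_db() are illustrative placeholders, not the actual StocksCafe code.

import json
from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def load_portfolio_from_db(user_id):
    # Placeholder for the real (slow) database query.
    return {"user_id": user_id, "holdings": []}

def get_portfolio(user_id):
    key = f"portfolio:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    data = load_portfolio_from_db(user_id)  # cache miss: query the database
    cache.set(key, json.dumps(data).encode("utf-8"), expire=300)  # keep it for 5 minutes
    return data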
Of course, there is no free lunch in this world. Using any caching system generally comes with two kinds of cost: increased memory usage and having to decide when to invalidate the cache. Increased memory usage is not a big problem nowadays since server RAM is inexpensive. Deciding when to update the cache, however, is very tricky. If you update the cache too often, it slows you down, which is the exact problem you are trying to solve; but if you do not update it often enough, or rather at the right time, you will serve outdated results from the cache.
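As a rough illustration of the trade-off, the sketch below shows the event-based approach: explicitly deleting the cached entry whenever the underlying data changes, so that the next read rebuilds it. The alternative is to rely purely on a short expiry time, which is simpler but can serve results that are up to that many seconds old. As before, this assumes pymemcache, and save_holdings_to_db() plus the key format are hypothetical.

from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def save_holdings_to_db(user_id, holdings):
    pass  # placeholder for the real database write

def update_holdings(user_id, new_holdings):
    save_holdings_to_db(user_id, new_holdings)
    # Drop the cached entry so the next read repopulates it with fresh data.
    # Missing this step on any write path is exactly how stale values end up being shown.
    cache.delete(f"portfolio:{user_id}")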
In fact, yesterday, a couple of users already noticed “wrong” day change values in their own portfolios and in other people’s shared portfolios. It was actually caused by a wrong assumption on my part, which led to outdated results being shown.
As I continue to cache more of StocksCafe’s content, if you notice anything outdated, please reach out and let me know!
Happy investing!~
Do you use indexes, especially primary keys and unique keys, for your database?
I am assuming your CPU and memory utilization is still below 50%. Have you checked which queries are used the most, as well as which queries take the longest to complete?
Also, do you have sufficient ‘OPEN’ connections allowed for your DB?
P.S. Not an expert, just have some DB experience.
@Martinlam, “Do you use indexes, especially primary keys and unique keys, for your database?” => I log all queries that take >0.3 seconds and create indexes for all of them. However, some queries still take >0.3 seconds even with indexes, hence I believe the problem is elsewhere.
@knic, “Have you checked which queries are used the most, as well as which queries take the longest to complete?” => Yes, I log all slow queries that take >0.3 seconds and create indexes for them. However, some still take >0.3 seconds even with indexes, so I believe the problem is elsewhere.
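(For anyone following along: whether a slow query is actually using an index can be checked with EXPLAIN, which reports the chosen index in its “key” column and shows “ALL” in the “type” column when a full table scan happens. Below is a rough sketch using PyMySQL; the table, columns, and connection details are placeholders, not the actual StocksCafe schema.)

import pymysql

conn = pymysql.connect(host="127.0.0.1", user="stockscafe",
                       password="secret", database="stockscafe")

with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute(
        "EXPLAIN SELECT * FROM portfolio_values "
        "WHERE user_id = %s AND value_date >= %s",
        (42, "2017-01-01"),
    )
    for row in cur.fetchall():
        # "type" = ALL means a full table scan; "key" is the index actually used.
        print(row["table"], row["type"], row["key"], row["rows"])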
“Also, do you have sufficient ‘OPEN’ connections allowed for your DB?” => Hmm… What counts as sufficient? And what is the correct way to monitor it?
show full processlist;
show variables where variable_name = 'max_connections';
I assume that I want the number of rows from “show full processlist;” to be close to max_connections? There seem to be only 11 current connections, whereas max_connections is 150.
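(For reference, two status counters make this easier to interpret: Threads_connected is the number of connections open right now, and Max_used_connections is the peak since the server last restarted. If the peak stays far below max_connections, the connection limit is unlikely to be the bottleneck. A sketch, again with PyMySQL and placeholder connection details:)

import pymysql

conn = pymysql.connect(host="127.0.0.1", user="stockscafe",
                       password="secret", database="stockscafe")

with conn.cursor() as cur:
    # Compare these against the max_connections setting above.
    cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
                "('Threads_connected', 'Max_used_connections')")
    for name, value in cur.fetchall():
        print(name, value)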
I am not an expert in MySQL.
Usually, my sysadmin/app folks will use an APM (application performance monitoring) tool to identify the bottleneck behind system slowness.
Hey evan, the updates have become noticeably faster. Just wanted to drop a note and say thanks! Appreciate the effort.
@hanneng, thanks. It would be great if you could share the specific APM tool they are currently using.
@seekingprivatereturn, great to know 🙂