pecl:mysqlnd_ms
Revision 2012/04/17 14:14 – [Raw Bin ideas (RFCs)] – uw (previous revision: 2012/04/16 14:05 – uw)
  * [open] More fail over options
  * [open] Silent and automatic connection fail over if the server returns a configured error code
    * NOTE: this may require refactoring of four filters.
  * [open] Automatic on-connect fail over, if activated, shall be done in a loop until a connection can be opened. Currently we stop after the first attempt. If automatic fail over is on, we try …
  * [open] Remember failed hosts for the duration of a web request (the plugin's …)
  * [open] Support directing statements manually to a group of nodes for more efficient server cache usage
  * [open] Refine QoS session consistency server selection policy
  * [open] Support "wait for GTID". Currently we loop over all servers until we find a matching one. MySQL 5.6 allows SQL users either to fetch the latest GTID, or to ask for a GTID and have their request block until the GTID has been replicated on the server. We should support the latter logic as well.
  * [open] Remember the most current server (a synchronous server) and test it first when searching for a GTID. Cached information may be used for the duration of a read-only request sequence. The cache must be flushed and refreshed on every write.
  * [open] Improve load balancing
== Problem to solve / Idea ==

MySQL replication is asynchronous. Slaves are //eventually consistent//.

MySQL replication used for read scale-out requires applications to be able to cope with stale data. If stale data is acceptable, a replication system may replace a MySQL slave read access with an access to a local, eventually stale, cache. **The cache access will lower network latency, resulting in a faster reply to the query.**
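The cache-instead-of-slave idea can be sketched as follows. This is a minimal illustrative Python sketch, not the plugin's actual C implementation; the `query_slave` callback and the TTL value are assumptions standing in for a real slave query and a configured staleness budget.

```python
import time

class TtlCache:
    """Local cache that may serve eventually-stale data instead of a slave."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # statement -> (result, stored_at)

    def get(self, statement):
        entry = self.store.get(statement)
        if entry is None:
            return None
        result, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.store[statement]  # older than the staleness budget
            return None
        return result

    def put(self, statement, result):
        self.store[statement] = (result, time.time())

def read(statement, cache, query_slave):
    """Serve from the local cache if possible, otherwise query a slave."""
    result = cache.get(statement)
    if result is not None:
        return result  # no network round trip: lower latency, less slave load
    result = query_slave(statement)
    cache.put(statement, result)
    return result
```

An application that already tolerates replication lag can tolerate a cache whose TTL is within the same staleness bound; that is the equivalence the idea above relies on.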
Whenever the plugin tries to connect to a node it may fail to do so. Connection attempts can be made when opening a connection or later, when executing a statement (lazy connections). By default the plugin bails out if the connection attempt fails. To make using the plugin as transparent as possible, automatic fail over on failed connection attempts can optionally be enabled.

The fail over logic itself is basic. Smaller improvements will make it much more capable.
== Feature description ==

Automatic fail over is basic. We shall:

  - make it configurable whether the fail over node search tries one or all possible alternatives
  - link fail over to certain error codes
  - remember failed hosts to skip them for the rest of the web request

Up to version 1.3 the automatic fail over stops after trying one alternative. For example, if a connection to A fails, we try B. If connecting to B fails, we stop and bail out. In that case the user must handle the error although there may be nodes C, D, ... which could be used. In the future the search for an alternative shall not stop after B but continue until a connection has been established or there are no alternatives left. If a connect to A fails, the plugin shall try B, C, D and so forth; it shall not stop after trying B and failing to connect to B.

Whether this shall become a new default or become configurable is to be decided.

Upon request the plugin will remember failed nodes for the duration of a web request. If connecting to a node has failed once, no further attempts will be made to connect to that node. This may lead to situations where nodes are skipped although they became available again in the meantime. This is ignored because most web requests are short-lived.
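The loop-over-all-alternatives search combined with the failed-host memory can be sketched like this. This is an illustrative Python sketch under stated assumptions, not the plugin's C code: `try_connect` stands in for the real connection attempt, and `failed_hosts` models the per-request blacklist described above.

```python
class FailoverError(Exception):
    pass

# Hosts that failed once are skipped for the rest of the web request.
failed_hosts = set()

def connect_with_failover(candidates, try_connect):
    """Try every candidate in turn instead of stopping after one alternative.

    `candidates` is an ordered list of hosts (A, B, C, ...); `try_connect`
    returns a connection on success or raises ConnectionError.
    """
    last_error = None
    for host in candidates:
        if host in failed_hosts:
            continue  # remembered as down for the duration of this request
        try:
            return try_connect(host)
        except ConnectionError as err:
            failed_hosts.add(host)  # never retried within this request
            last_error = err
    raise FailoverError("no node reachable") from last_error
```

Note that the sketch deliberately ignores nodes becoming available again mid-request, matching the short-lived-request argument above.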
=== Support directing statements manually to a group of nodes for more efficient server cache usage ===
In large clusters users can improve performance by optimizing query distribution and server selection using criteria such as cache usage, distance or latency. Application developers shall be allowed to annotate a statement so that it is executed on a certain group of nodes.

Server cache usage can be optimized by distributing queries in a way that they hit hot caches. For example, clients may want to run all accesses to tables A, B and C on node group 1 and table accesses to D preferably on group 2. Because we do not support automatic table filtering/…

Some nodes may be co-located with clients whereas others may be remote. This is often done in large systems when storing multiple copies, e.g. on the same machine, on the same rack, in the same data center, in a different data center. In case of MySQL Replication it is unlikely to find such highly optimized setups; however, there may be nodes closer to the client than others. Nodes closer to a client may be given a certain alias or group name, and application developers shall be allowed to hint routing to a group of such nodes.
== Feature description ==

For every node in the configuration users shall be able to set one or more group names. A SQL hint, for example MS_GROUP=name, can be used to hint the load balancer to direct a request to a certain node group.
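The group-based candidate filtering could look like the following Python sketch. The node names, the `groups` configuration shape, and the `/*MS_GROUP=name*/` comment syntax are assumptions for illustration; the text above only proposes MS_GROUP=name as an example, and the plugin's final hint syntax may differ.

```python
import re

# Hypothetical per-node configuration: each node may carry one or more group names.
NODES = {
    "slave1": {"groups": {"1"}},
    "slave2": {"groups": {"1", "2"}},
    "slave3": {"groups": {"2"}},
}

def candidates_for(statement):
    """Restrict load balancing to a node group if the statement carries a hint.

    Without a hint, every configured node remains a load balancing candidate;
    with a hint, only nodes belonging to the named group are considered.
    """
    match = re.search(r"/\*MS_GROUP=(\w+)\*/", statement)
    if match is None:
        return sorted(NODES)  # no hint: all nodes are candidates
    group = match.group(1)
    return sorted(n for n, cfg in NODES.items() if group in cfg["groups"])
```

A statement such as `/*MS_GROUP=2*/SELECT * FROM D` would then only be balanced over the group-2 nodes, keeping table D's working set in those nodes' caches.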
+ | |||
+ | |||
+ | === Refine QoS session consistency server selection policy === | ||
+ | |||
+ | == Problem to solve / Idea == | ||
+ | |||
+ | Users can request session consistency. Session consistency guarantees that a user will only be redirected to nodes that have already replicated his changes. Currently we check the status of all configured nodes before we pick a node for statement execution. Checking the status causes extra load on the nodes. | ||
== Feature description ==

Two additional ways of finding candidates help to lower the overhead:

  * Wait for GTID
  * Cache/…

There are two ways in MySQL 5.6 to check whether a server has replicated a GTID. One can ask a node whether a GTID has been replicated and either get an immediate response (yes/no) or have the reply delayed until the node has replicated the GTID (wait for GTID). Currently only the first logic is used. We shall also support "wait for GTID": we pick one candidate and wait until that candidate has replicated the GTID. This lowers the checking overhead because only one node is checked instead of all configured nodes.

Furthermore, we can persist GTID/…
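The "wait for GTID" selection can be sketched as follows. This is an illustrative Python sketch: `wait_for_gtid(node, gtid, timeout)` is a hypothetical stand-in for the server-side blocking check (in MySQL 5.6 a SQL-level wait function; the exact function name depends on the server version), returning True once the node has replicated the GTID and False on timeout.

```python
def pick_session_consistent_node(nodes, gtid, wait_for_gtid, timeout=1.0):
    """Pick one candidate and block until it has replicated the GTID,
    instead of polling the replication status of every configured node."""
    for node in nodes:
        # Only this one node is queried; the remaining nodes see no extra load.
        if wait_for_gtid(node, gtid, timeout):
            return node
    raise RuntimeError("no node reached the required GTID in time")
```

Compared with the current loop-over-all-servers check, the first candidate that satisfies the wait ends the search, so in the common case exactly one node is contacted.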
pecl/mysqlnd_ms.txt · Last modified: 2017/09/22 13:28 by 127.0.0.1