@mysql_ru

 
Page 1 of 75
Egor
20.11.2016
12:31:29
Hi everyone. This group was created because no such group turned up in search.

Dmitry
26.11.2016
21:38:00
lol

Egor
26.11.2016
21:57:09
We should add a bot here that asks new members something

For example: "Hi @rvncerr, tell us how you use MySQL?"

Dmitriy
26.11.2016
21:57:37
Not at all. :)

It would just fall over under my load. :)

Alex
26.11.2016
22:01:57
me neither, I'm here purely to keep an eye on the enemy's schemes )

Egor
26.11.2016
22:44:36
Ah, and to keep up with more, you can subscribe to the RSS feed http://planet.mysql.com/

by the way, is there a bot that relays RSS news?

Planet MySQL
26.11.2016
23:03:59
Hello, world!

Egor
26.11.2016
23:04:30
Looks like I've set it up, and now news from Planet will be forwarded here

Planet MySQL
27.11.2016
19:48:48
Upstart Basics for Automatic Restarts of MaxScale After Crash http://mysqlentomologist.blogspot.com/2016/11/upstart-basics-for-automatic-restarts.html

Historically I have not cared much about MariaDB's MaxScale, at least since I learned how to build it from source when needed. But, as a support engineer who works at MariaDB, I sometimes have to deal with problems related to MaxScale, and this week it happened that I had to explain how to implement automatic restarts of the MaxScale "daemon" after crashes on RHEL 6.x.

In the process I found out that two of my most often used Linux distributions, CentOS 6.x and Ubuntu 14.04, actually use Upstart, so good old System V init tricks and scripts work there only partially, and only because somebody cared to integrate them into this "new" approach to starting tasks and services during boot, stopping them during shutdown, and supervising them while the system is running. I should have looked into this years ago, but a customer's question finally forced me to check and study the details of how this system actually works.

So, unfortunately, there is no script like mysqld_safe to start and restart MaxScale after installing the official RPM from MariaDB (in this case it was maxscale-2.0.1-2.x86_64). My first idea was to write one, but then I asked myself why it is not there yet, decided to check what's already provided, and took a closer look at the /etc/init.d/maxscale I have on my CentOS 6.8 VM (the closest to what the customer used). It's a typical classic shell script to start a service, and it starts MaxScale like this:

    start() {
        echo -n $"Starting MaxScale: "
        ...
        ulimit -HSn 65535
        daemon --pidfile $MAXSCALE_PIDFILE /usr/bin/maxscale --user=maxscale $MAXSCALE_OPTIONS >& /dev/null
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$servicename
        ...
        # Return right code
        if [ $RETVAL -ne 0 ]; then
           failure
           RETVAL=$_RETVAL_NOT_RUNNING
        fi
        echo
        return $RETVAL
    }

Basically, it runs /usr/bin/maxscale --user=maxscale; the rest are details (like the location of the PID file that the rest of the script relies on). There is no advanced script to monitor the status of the process with this PID (like mysqld_safe), and nothing to take care of automatic restarts. It is supposed that I just execute chkconfig maxscale on, and the service starts when the system enters the proper runlevel. A simple test shows that when the maxscale process is killed, it's gone and is not restarted.

In the good old times I'd just add something like this to /etc/inittab (as nice articles like this one suggest):

    mx:2345:respawn:/usr/bin/maxscale --user=maxscale

but a quick check and then further reading proved that this is not going to work on CentOS 6.8, as it uses Upstart.

So, either I had to write something similar to mysqld_safe for MaxScale, or (having in mind the size of the code and the number of bugs we had in that script in the past) I'd better find out the supposed way to respawn processes in Upstart. The basic ideas are, again, simple. One has to create an /etc/init/service_name.conf file and put something like this there (a real quote from one of the Ubuntu files for MySQL):

    start on runlevel [2345]
    stop on starting rc RUNLEVEL=[016]
    respawn
    respawn limit 2 5
    ...
    pre-start script
    ...
    end script
    exec /usr/sbin/mysqld
    post-start script
    ...
    end script

The file is easy to understand even without reading the manual. One has to set when the service starts and stops, add a respawn clause if we want to restart it after unexpected crashes or kills of the process, optionally limit the number of restarts and the intervals between them, and, optionally, do something before and after start.

I quickly created /etc/init/maxscale.conf based on the above, and it did a great job of starting MaxScale automatically upon system startup. I basically just used exec /usr/bin/maxscale --user=maxscale and decided to deal with options and other details later if needed. But what looked strange from the very beginning were what appeared to be repeated, failed attempts to start the maxscale process in /var/log/messages:

    Nov 26 17:44:45 centos maxscale[20229]: MaxScale started with 1 server threads.
    Nov 26 17:44:45 centos init: maxscale main process ended, respawning
    Nov 26 17:44:45 centos maxscale[20229]: Started MaxScale log flusher.
    Nov 26 17:44:45 centos maxscale[20235]: Working directory: /var/log/maxscale
    Nov 26 17:44:45 centos maxscale[20235]: MariaDB MaxScale 2.0.1 started
    Nov 26 17:44:45 centos maxscale[20235]: MaxScale is running in process 20235
    ...
    Nov 26 17:44:45 centos maxscale[20235]: Loaded module qc_sqlite: V1.0.0 from /usr/lib64/maxscale/libqc_sqlite.so
    Nov 26 17:44:45 centos maxscale[20235]: MaxScale is already running. Process id: 20229. Use another location for the PID file to run multiple instances of MaxScale on the same machine.
    Nov 26 17:44:45 centos init: maxscale main process (20234) terminated with status 4
    Nov 26 17:44:45 centos init: maxscale main process ended, respawning

Moreover, when the proper maxscale process was killed, it was NOT respawned as expected.

It was just the proper time to read the manual more carefully, specifically the part about the expect stanza (that I had noted in some official Upstart scripts):

"To allow Upstart to determine the final process ID for a job, it needs to know how many times that process will call fork(2). Upstart itself cannot know the answer to this question since once a daemon is running, it could then fork a number of "worker" processes which could themselves fork any number of times. Upstart cannot be expected to know which PID is the "master" in this case, considering it does not know if worker processes will be created at all, let alone how many times, or how many times the process will fork initially. As such, it is necessary to tell Upstart which PID is the "master" or parent PID. This is achieved using the expect stanza. The syntax is simple, but you do need to know how many times your service forks."

Let's quickly check how many times fork() is called in maxscale (I'd know it better if I had ever cared to study the source code in detail, but I had not checked most of it yet). A test based on that cookbook gives unexpected results:

    [root@centos ~]# strace -o /tmp/strace.log -fFv maxscale &
    [1] 2369
    [root@centos ~]# sleep 10
    [root@centos ~]# ps aux | grep strace
    root      2369  2.6  0.0   4476   888 pts/0    S    20:28   0:00 strace -o /tmp/strace.log -fFv maxscale
    root      2382  0.0  0.0 103312   868 pts/0    S+   20:28   0:00 grep strace
    [root@centos ~]# pkill -9 strace
    [1]+  Killed                  strace -o /tmp/strace.log -fFv maxscale
    [root@centos ~]# ps aux | grep maxscale
    root      2375  1.3  0.2 276168  3896 ?        Ssl  20:28   0:00 maxscale
    root      2385  0.0  0.0 103312   868 pts/0    S+   20:28   0:00 grep maxscale
    [root@centos ~]# egrep "\<(fork|clone)\>\(" /tmp/strace.log | wc | awk '{print $1}'
    5

How come we have 5 fork calls? Here they are:

    [root@centos ~]# egrep "\<(fork|clone)\>\(" /tmp/strace.log
    2374  clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fb024c08ab0) = 2375
    2375  clone(child_stack=0x7fb01f819f10, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fb01f81a9d0, tls=0x7fb01f81a700, child_tidptr=0x7fb01f81a9d0) = 2376
    2375  clone(child_stack=0x7fb01e118f10, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fb01e1199d0, tls=0x7fb01e119700, child_tidptr=0x7fb01e1199d0) = 2377
    2375  clone(child_stack=0x7fb01d10af10, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fb01d10b9d0, tls=0x7fb01d10b700, child_tidptr=0x7fb01d10b9d0) = 2378
    2375  clone(child_stack=0x7fb017ffef10, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fb017fff9d0, tls=0x7fb017fff700, child_tidptr=0x7fb017fff9d0) = 2379

It seems the process with PID 2374 actually forked just once, to produce PID 2375, and then that process started several threads. We can see them:

    [root@centos ~]# ps -T -p `pidof maxscale`
      PID  SPID TTY          TIME CMD
     2375  2375 ?        00:00:00 maxscale
     2375  2376 ?        00:00:00 maxscale
     2375  2377 ?        00:00:00 maxscale
     2375  2378 ?        00:00:00 maxscale
     2375  2379 ?        00:00:00 maxscale
    [root@centos ~]#

So, it was really one fork() (and I could have noticed that even from studying /var/log/messages before), and I have to add the expect fork stanza to my Upstart configuration file:

    [root@centos ~]# cat /etc/init/maxscale.conf
    # MaxScale service
    description "MaxScale"
    start on stopped rc RUNLEVEL=[2345]
    stop on starting rc runlevel [!2345]
    respawn
    respawn limit 2 5
    expect fork
    exec /usr/bin/maxscale --user=maxscale

This way it works as expected, as one may easily check:

    [root@centos ~]# initctl status maxscale
    maxscale start/running, process 6600
    [root@centos ~]# maxadmin
    MaxScale> show servers
    Server 0x19e47a0 (server1)
            Server:                              127.0.0.1
            Status:                              Master, Running
            Protocol:                            MySQLBackend
            Port:                                3306
            Server Version:                      5.7.15-9-log
            Node Id:                             1
            Master Id:                           -1
            Slave Ids:
            Repl Depth:                          0
            Number of connections:               0
            Current no. of conns:                0
            Current no. of operations:           0
    MaxScale> quit
    [root@centos ~]# kill -9 6600
    [root@centos ~]# ps aux | grep maxscale
    maxscale  6627  2.0  0.2 276168  3884 ?        Ssl  20:41   0:00 /usr/bin/maxscale --user=maxscale
    root      6633  0.0  0.0 103312   872 pts/0    S+   20:41   0:00 grep maxscale
    [root@centos ~]# initctl status maxscale
    maxscale start/running, process 6627

In /var/log/messages we clearly see that the process is respawned by init:

    ...
    Nov 26 20:38:15 centos maxscale[6600]: Started MaxScale log flusher.
    Nov 26 20:40:33 centos maxscale[6600]: Loaded module MaxAdminAuth: V2.0.0 from /usr/lib64/maxscale/libMaxAdminAuth.so
    Nov 26 20:41:52 centos init: maxscale main process (6600) killed by KILL signal
    Nov 26 20:41:52 centos init: maxscale main process ended, respawning
    Nov 26 20:41:52 centos maxscale[6627]: Working directory: /var/log/maxscale
    Nov 26 20:42:07 centos maxscale[6627]: Loaded module MaxAdminAuth: V2.0.0 from /usr/lib64/maxscale/libMaxAdminAuth.so
    ...

After a few more checks I asked for this to be implemented officially in the packages for the Linux distributions that use Upstart; see MXS-1027.

To summarize: I wish I had cared to find out how Upstart works a long time ago. Now it's probably time to study systemd :) Anyway, after some reading and testing, one can use Upstart efficiently to provide automated service starts and restarts for MySQL server and the services used with it.
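As a side note, the single fork() the strace session reveals is the classic daemonization pattern, and it is exactly why init needs "expect fork" to track the child rather than the parent. A minimal Python sketch (hypothetical demo code, not MaxScale's actual source) of that pattern:

```python
import os
import time

def daemonize_once():
    """Fork exactly once, as maxscale does: the parent keeps the PID
    that init launched, while the child carries on as the daemon.
    Without 'expect fork', Upstart would keep supervising the parent."""
    pid = os.fork()
    if pid > 0:
        return pid          # parent: PID of the daemon child
    os.setsid()             # child: start a new session, detach from the tty
    time.sleep(0.2)         # stand-in for the real service loop
    os._exit(0)             # child exits without running parent cleanup

daemon_pid = daemonize_once()
print("daemon child is PID", daemon_pid, "- init launched PID", os.getpid())
os.waitpid(daemon_pid, 0)   # reap the child so this demo exits cleanly
```

With `expect fork`, Upstart follows exactly one fork(2) and supervises the resulting child PID, which is why the respawn-on-kill test above finally worked.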

Dmitry
27.11.2016
20:00:51
meh, what a wall of text

a headline and a link would be better

Egor
28.11.2016
08:19:00
Not a smart bot :(

This one is from the Manybot service, we need to find something better

/add@TheFeedReaderBot

Feed Reader Bot
28.11.2016
08:34:21
Type the URL of: a feed page, a Facebook public page, or a Twitter/Youtube/Instagram account

Egor
28.11.2016
08:34:35
http://planet.mysql.com/rss20.xml

Feed Reader Bot
28.11.2016
08:34:44
Feed successfully added

Here is the latest news from feed 'Planet MySQL':

Planet MySQL Upstart Basics for Automatic Restarts of MaxScale After Crash

Egor
28.11.2016
08:37:59
That's better

Feed Reader Bot
28.11.2016
16:20:19
Planet MySQL Moving with SQL Server to Linux? Move from SQL Server to MySQL as well! In recent years, a large number of individuals and organizations have been ditching the Windows platform for Linux, and this number will continue to grow as Linux develops further. Linux has long been the leader in web servers, as most web servers run on Linux, and this could be one of the reasons the high rate of migration is being seen.

Planet MySQL Percona Server 5.6.34-79.1 is now available

Percona announces the release of Percona Server 5.6.34-79.1 on November 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Based on MySQL 5.6.34, including all the bug fixes in it, Percona Server 5.6.34-79.1 is the current GA release in the Percona Server 5.6 series. Percona Server is open source and free; this is the latest release of our enhanced, drop-in replacement for MySQL. Complete details of this release are available in the 5.6.34-79.1 milestone on Launchpad.

Deprecated features:
- The metrics for scalability measurement feature is now deprecated. Users who have installed this plugin but are not using its capability are advised to uninstall the plugin due to known crashing bugs.

Bugs fixed:
- When a stored routine would call an administrative command such as OPTIMIZE TABLE, ANALYZE TABLE, ALTER TABLE, CREATE/DROP INDEX, etc., the effective value of log_slow_sp_statements was overwritten by the value of log_slow_admin_statements. Bug fixed #719368.
- "Thread Pool thread limit reached" and "failed to create thread" messages are now printed on the first occurrence as well. Bug fixed #1636500.
- INFORMATION_SCHEMA.TABLE_STATISTICS and INFORMATION_SCHEMA.INDEX_STATISTICS tables were not correctly updated for TokuDB. Bug fixed #1629448.
- Other bugs fixed: #1633061, #1633430, and #1635184.

Release notes for Percona Server 5.6.33-79.0 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

KOT
28.11.2016
18:07:39
What year is this announcement from?

Egor
28.11.2016
18:27:51
> November 28, 2016

It's just an update with bug fixes

Feed Reader Bot
28.11.2016
19:38:03
Planet MySQL Making MySQL Better for Operations In the MySQL team, one of our focuses over the last few releases has been improving the usability of the MySQL Server for operations teams.  The results can be seen in a number of areas: Our continued investment in Performance Schema.  …

Planet MySQL Percona Server 5.7.16-10 is now available

Percona announces the GA release of Percona Server 5.7.16-10 on November 28, 2016. Download the latest version from the Percona web site or the Percona Software Repositories. Based on MySQL 5.7.16, including all the bug fixes in it, Percona Server 5.7.16-10 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.16-10 milestone at Launchpad.

Deprecated features:
- The metrics for scalability measurement feature is now deprecated. Users who have installed this plugin but are not using its capability are advised to uninstall the plugin due to known crashing bugs.

Bugs fixed:
- When a stored routine would call an administrative command such as OPTIMIZE TABLE, ANALYZE TABLE, ALTER TABLE, CREATE/DROP INDEX, etc., the effective value of log_slow_sp_statements was overwritten by the value of log_slow_admin_statements. Bug fixed #719368.
- The server wouldn't start after a crash with innodb_force_recovery set to 6 if a parallel doublewrite file existed. Bug fixed #1629879.
- "Thread Pool thread limit reached" and "failed to create thread" messages are now printed on the first occurrence as well. Bug fixed #1636500.
- INFORMATION_SCHEMA.TABLE_STATISTICS and INFORMATION_SCHEMA.INDEX_STATISTICS tables were not correctly updated for TokuDB. Bug fixed #1629448.
- Other bugs fixed: #1633061, #1633430, and #1635184.

The release notes for Percona Server 5.7.16-10 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.

Planet MySQL Big News In Databases

2016 has seen lots of big news, also when it comes to databases. Let's have a look at the really BIG news.

MySQL and MariaDB become modern
For a long time MySQL and MariaDB were mostly limited to the functionality of SQL-92. Now both have decided to move forward by supporting the WITH and OVER clauses in their next major releases (MySQL 8.0, MariaDB 10.2). The MariaDB beta release already supports both features, while the current MySQL "lab release" only supports the WITH clause so far.

Parallel execution in PostgreSQL 9.6
Commercial databases have done it for decades, but in the open source database scene it is quite unique: since release 9.6, PostgreSQL can execute single queries on multiple cores and thus make better use of modern hardware for data-intensive queries. At the moment only some special cases can take advantage of in-query parallelism; the next releases will gradually extend this functionality. Speaking of next releases: the next major PostgreSQL release will be 10.0.

SQL Server on Linux (beta)
A beta of the next version of SQL Server (currently called "vNext") was released, also for Linux. On supported distributions (Red Hat, Ubuntu) the installation is done by the regular package manager; further, Microsoft provides Docker images. The "What's New" list of vNext is rather modest from a developer's perspective. However, SQL Server finally gets STRING_AGG. Also noteworthy is that CREATE OR ALTER was added to SQL Server 2016 SP1, which was released at the same time the vNext beta went public.

Oracle 12cR2 currently in the cloud only
On the one hand, version 12.2 of the Oracle Database has already been released. On the other hand, you cannot download it: cloud first seems to be the new strategy for Oracle. For developers the new release is not very exciting anyway: error handling for CASE and LISTAGG, approximate aggregates and, of course, more JSON. Find the full list in the "New Features" documentation.

DB2 learns OFFSET, unfortunately
DB2 for Linux, UNIX and Windows (LUW for short) got release 11.1 earlier this year. The general theme of the new SQL features was compatibility: it seems IBM aims to support every single SQL dialect out there. And so it comes that DB2 supports the SQL:2011 OFFSET clause even without enabling the MySQL compatibility vector. A big setback for my No-Offset movement.

SQLite gets row values
SQLite compensates for this by introducing row-values support in version 3.15.0: the SQL-92 feature that often makes life without OFFSET easier.

Free No-Offset sticker
If you don't know about No-Offset yet, you should definitely read this article. During this year's Cyber Monday sale of my book, I'd be glad to mail you a No-Offset sticker for free. You'll also get the Use The Index, Luke! and modern SQL stickers. Just fill in this form and get them without any obligation: Order free stickers now.

"Big News In Databases" by Markus Winand was originally published at Use The Index, Luke!.
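The row-values feature mentioned for SQLite 3.15.0 is easy to try from Python's bundled sqlite3 module. A minimal sketch (the table `t` and its data are made up for the demo; assumes the interpreter links against SQLite 3.15.0 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 1), (1, 2), (2, 1)])

# Row-value comparison (SQL-92, in SQLite since 3.15.0): the (a, b)
# tuples are compared lexicographically, selecting rows strictly
# after (1, 1) in (a, b) order.
rows = conn.execute(
    "SELECT a, b FROM t WHERE (a, b) > (1, 1) ORDER BY a, b"
).fetchall()
print(rows)
```

This lexicographic "rows after (a, b)" comparison is exactly what keyset pagination needs in place of OFFSET.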

Planet MySQL How to build a stable MySQL Replication environment

While straightforward to deploy, a production-ready and hence stable MySQL Replication setup requires a bit of planning and preparation. What does a solid replication configuration look like? What do you do when a topology is broken and replication will not restart? How do you ensure performance? Or perform schema changes? If you'd like to learn how to build a stable environment with MySQL replication, this webinar is for you. From OS and DB configuration checklists to schema changes and disaster recovery, you'll have the information needed.

Join us next Tuesday as Krzysztof Książek, Senior Support Engineer at Severalnines, shares his top 9 tips on how to best build a production-ready MySQL Replication environment.

Top 9 Tips for building a stable MySQL Replication environment
Tuesday, December 6th
Sign up for the webinar

We look forward to "seeing" you there!

Agenda:
- Sanity checks before migrating into a MySQL replication setup
- Operating system configuration
- Replication
- Backup
- Provisioning
- Performance
- Schema changes
- Reporting
- Disaster recovery

Speaker: Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Tags: MySQL, mysql replication, tips, backup, performance

Planet MySQL Percona XtraBackup 2.3.6 is now available

Percona announces the release of Percona XtraBackup 2.3.6 on November 29, 2016. Downloads are available from our download site or the Percona Software Repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups. This release is the current GA (Generally Available) stable release in the 2.3 series.

New features:
- Percona XtraBackup now supports SHA256 passwords. Using the SHA256 algorithm requires either an SSL-encrypted connection or public key encryption for the password exchange, which is only available when both client and server are linked with OpenSSL.
- Percona XtraBackup now supports Command Options for Secure Connections.
- NOTE: Due to xbcrypt format changes, backups encrypted with this Percona XtraBackup version will not be recoverable by older versions.

Bugs fixed:
- Fixed intermittent assertion failures that happened when Percona XtraBackup couldn't correctly identify the server version. Bug fixed #1568009.
- The safe slave backup algorithm performed too-short delays between retries, which could cause backups to fail on a busy server. Bug fixed #1624473.
- Fixed new compilation warnings with GCC 6. Bug fixed #1641612.
- xbcrypt was not setting the Initialization Vector (IV) correctly (and thus it was not using an IV). This caused the same ciphertext to be generated across different runs (for the same message and the same key); the IV provides the extra randomness that ensures the same ciphertext is not generated across runs. Bug fixed #1643949.
- A backup would still succeed even if XtraBackup failed to write the metadata. Bug fixed #1623210.
- xbcloud now supports EMC ECS Swift API Authorization requests. Bugs fixed #1638017 and #1638020 (Txomin Barturen).
- Percona XtraBackup would fail to back up MariaDB 10.2 with the unsupported server version error message. Bug fixed #1602842.
- Other bugs fixed: #1639764 and #1639767.

Release notes with all the bug fixes for Percona XtraBackup 2.3.6 are available in our online documentation. Bugs can be reported on the Launchpad bug tracker.

Planet MySQL Percona XtraBackup 2.4.5 is now available

Percona announces the GA release of Percona XtraBackup 2.4.5 on November 29th, 2016. You can download it from our download site and from the apt and yum repositories. Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New features:
- Percona XtraBackup now supports SHA256 passwords. Using the SHA256 algorithm requires either an SSL-encrypted connection or public key encryption for the password exchange, which is only available when both client and server are linked with OpenSSL.
- Percona XtraBackup now supports Command Options for Secure Connections.
- NOTE: Due to xbcrypt format changes, backups encrypted with this Percona XtraBackup version will not be recoverable by older versions.

Bugs fixed:
- Percona XtraBackup would crash while preparing the backup, during the shutdown, when the master thread was performing a checkpoint and the purge thread was expecting that all other threads had completed or were idle. Bug fixed #1618555.
- The safe slave backup algorithm performed too-short delays between retries, which could cause backups to fail on a busy server. Bug fixed #1624473.
- Percona XtraBackup didn't check the log block checksums. Bug fixed #1633448.
- Fixed new compilation warnings with GCC 6. Bug fixed #1641612.
- xbcrypt was not setting the Initialization Vector (IV) correctly (and thus it was not using an IV). This caused the same ciphertext to be generated across different runs (for the same message and the same key); the IV provides the extra randomness that ensures the same ciphertext is not generated across runs. Bug fixed #1643949.
- target-dir was no longer relative to the current directory but to datadir instead. Bug fixed #1611568.
- A backup would still succeed even if xtrabackup failed to write the metadata. Bug fixed #1623210.
- xbcloud now supports EMC ECS Swift API Authorization requests. Bugs fixed #1638017 and #1638020 (Txomin Barturen).
- Some older versions of MySQL did not bother to initialize the page type field for pages which are not index pages (see upstream #76262 for more information). Having this page type uninitialized could cause xtrabackup to crash on prepare. Bug fixed #1641426.
- Percona XtraBackup would fail to back up MariaDB 10.2 with the unsupported server version error message. Bug fixed #1602842.
- Other bugs fixed: #1639764, #1639767, #1641596, and #1641601.

Release notes with all the bug fixes for Percona XtraBackup 2.4.5 are available in our online documentation. Please report any bugs to the Launchpad bug tracker.
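The xbcrypt IV bug (#1643949) fixed in both releases is worth a second look: a cipher run with a fixed IV is deterministic, so identical plaintexts produce identical ciphertexts and an observer can tell which encrypted backups repeat. A toy illustration (this hash-derived keystream is invented for the demo and is not the cipher xbcrypt actually uses):

```python
import hashlib
import os

def keystream_encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    # Toy stream cipher: derive a keystream from SHA-256(key || iv)
    # and XOR it with the message. Illustrative only.
    stream = hashlib.sha256(key + iv).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

key = b"backup-key"
msg = b"same secret"

fixed_iv = b"\x00" * 16
c1 = keystream_encrypt(key, fixed_iv, msg)
c2 = keystream_encrypt(key, fixed_iv, msg)
print(c1 == c2)   # identical ciphertexts: message equality leaks

c3 = keystream_encrypt(key, os.urandom(16), msg)
print(c1 == c3)   # almost surely different once the IV is random
```

Drawing a fresh random IV per run restores the property that equal inputs look unrelated on the wire, which is what the fix provides.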

Planet MySQL Patching in the Cloud

As a developer at another company (not Oracle), I once worked the support phone lines, as the company's philosophy was that developers should be familiar with real customers and their applications. I recall one call on a particular morning: a customer was crying because she had upgraded her database. The upgrade had destroyed her data, and she had no backup. The research data had been created for her PhD thesis; sadly, she could not complete her degree since her work had been destroyed. This true story has always reminded me of how deadly an upgrade can be.

Fortunately, the Oracle MySQL Cloud Service (MySQLCS) software has made the upgrade process more robust. With the MySQLCS software, you can test drive your upgrade process with your application. By spinning up a MySQLCS cloud instance you can:
- Pre-check the new MySQL version to make sure system requirements such as disk space are adequate before the upgrade process begins. Mark one less thing on your upgrade checklist.
- Rely on a quick MySQL Enterprise Backup before the patch begins, which automatically makes sure you always have a pre-patched version of your database in the cloud, also stored locally on the VM created by the cloud.
- Downgrade to a previous MySQL version if you see that something in the new patch is not to your liking.

Here's another consideration: when there is a new version of MySQL, the MySQLCS UI notifies you on your patch 'panel' that a new MySQL version has been automatically uploaded to the cloud for you. And you don't have to upgrade your version of MySQL; if you decide to upgrade, you can do so when it is convenient for you, not when it is convenient for us.

When I talked to developers and architects about patching in MySQLCS at the Cloud Expo in Santa Clara a few weeks ago, they really liked the idea of having so much control over the patching process. Moving your Dev/Test use case to the cloud makes a lot of sense in these days of agile development, where you get not only a refined patching process with MySQLCS but also the ability to test drive your new applications with just-released versions of MySQL.

This is only one of the many new features of MySQLCS. Note that MySQLCS is enterprise-ready, with all the proven Oracle MySQL Enterprise features: tight security mechanisms, automatic fast backups, and a MySQL monitor that can drill down from query execution times to individual query plans.

"The statements and opinions expressed here are my own and do not necessarily represent those of the Oracle Corporation." For more information about Oracle MySQLCS, visit https://cloud.oracle.com/mysql or call 1.800.ORACLE1 to speak with an Oracle representative, or email me at Kathy.Forte@Oracle.com

Planet MySQL We’ve answered Eurofunk’s database SOS call

Eurofunk replaces Oracle with feature-rich Severalnines ClusterControl. Today we’re happy to announce Eurofunk, one of the largest European command centre system specialists, as our latest customer. Severalnines was brought on board to help manage the databases used by European blue light services’ command centres, which are responsible for dispatching response teams to emergencies. Eurofunk also provides command centres for well-known car manufacturers.

Eurofunk began operations in 1969 as a sole trader with a focus on consumer electronics and radio technology. It evolved into a crucial component of the emergency services in Europe, responsible for planning, implementing and operating command centres. To provide efficient blue light services, it is crucial for Eurofunk to have an IT infrastructure which is highly available and fast: unreliability and slow performance are unforgivable in a sector relying so heavily on speed of execution and directness of action.

Severalnines’ ClusterControl was preferred to Oracle because database speed was improved at a fraction of Oracle’s licensing costs. Eurofunk had also experienced database downtime caused by prolonged fail-over times of their Oracle databases. With ClusterControl, it was possible to easily deploy an active/active cluster to reduce downtime scenarios. Galera Cluster for MySQL was chosen as the back-end database replication technology; Severalnines provided the platform to deploy, monitor and manage the back-end cluster and associated database load balancers, along with full enterprise support for the operations team. Severalnines also helped Eurofunk improve the end-user experience for dispatchers working in the control centres: rolling updates to the database layer are possible, so emergency services have continuous access to up-to-date information to work with.

Stefan Rehlegger, System Architect, Eurofunk, said: “It’s been hard to find a unified, feature-rich database cluster management system in today’s market, but we’ve found one that has proved invaluable to our projects. With Severalnines’ help we’ve been able to deploy a centralised system across Europe, and we’re planning to expand our usage of ClusterControl to other territories. Deployment via a web interface, without any background knowledge of database clustering, helps us make services available on a 24h basis more easily. Severalnines also provided great support during systems implementation; it is the database management life-saver for a fast-paced business like ours.”

Vinay Joosery, Severalnines CEO, added: “As an outsider who has watched too many TV shows, working in emergency response looks like the coolest thing to do. In reality, the pressure command and control centres are under must be unbearable, and to do their work effectively they need the freshest information on accidents and emergencies. I’m happy to see Severalnines’ technology markedly improve the performance of their systems. Eurofunk keeps people safe, and if we can keep their database safe and available, it means they can continue doing the great work they do.”

About Severalnines: Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability. Severalnines’ products are used by developers and administrators of all skill levels to provide the full ‘deploy, manage, monitor, scale’ database cycle, freeing them from the complexity and learning curves typically associated with highly available database clusters. The company has enabled over 8,000 deployments to date via its popular ClusterControl product. Currently counting BT, Orange, Cisco, CNRS[...]

Planet MySQL We’ve answered Eurofunk’s database SOS call Eurofunk replaces Oracle with feature-rich Severalnines ClusterControl Today we’re happy to announce Eurofunk, one of the largest European command centre system specialists, as our latest customer. Severalnines was brought on board to help manage the databases used by European blue light services’ command centres who are responsible for dispatching response teams to emergencies. Eurofunk also provides command centres for well-known car manufacturers. Eurofunk began operations in 1969 as a sole trader with a focus on consumer electronics and radio technology. It evolved into a crucial component of the emergency services in Europe, responsible for planning, implementing and operating command centres. To provide efficient blue light services, it is crucial for Eurofunk to have an IT infrastructure which is highly available and fast. Unreliability and slow performance is unforgivable in a sector relying so heavily on speed of execution and directness of action. Severalnines’ ClusterControl was preferred to Oracle because database speed was improved at a fraction of Oracle’s licensing costs. Eurofunk also experienced database downtime caused by prolonged fail-over times of their Oracle databases. With ClusterControl, it was possible to easily deploy an active/active cluster to reduce downtime scenarios. Galera Cluster for MySQL was chosen as a back-end database replication technology; Severalnines provided the platform to deploy, monitor and manage the back-end cluster and associated database load balancers, along with full enterprise support for the operations team. Severalnines also helped Eurofunk improve end user experience for dispatchers working in the control centres. Rolling updates to the database layer is possible so emergency services have continuous access to up-to-date information to work with. 
Stefan Rehlegger, System Architect, Eurofunk, said, “It’s been hard to find a unified feature-rich database cluster management system in today’s market but we’ve found one that has proved invaluable to our projects. With Severalnines’ help we’ve been able to deploy a centralised system across Europe and we’re planning to expand our usage of ClusterControl to other territories. The deployment via a web interface without any background knowledge of database clustering helps us make services available on a 24h basis more easily. Severalnines also provided great support during systems implementation; it is the database management life-saver for a fast-paced business like ours.” Vinay Joosery, Severalnines CEO, added, “As an outsider who has watched too many TV shows, working in emergency response looks like the coolest thing to do. In reality the pressure command and control centres are under must be unbearable and to do their work effectively, they need the freshest information on accidents and emergencies. I’m happy to see Severalnines’ technology markedly improve the performance of their systems. Eurofunk keeps people safe and if we can keep their database safe and available, it means they can continue doing the great work they do.” About Severalnines Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability. Severalnines' products are used by developers and administrators of all skills levels to provide the full 'deploy, manage, monitor, scale' database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. The company has enabled over 8,000 deployments to date via its popular ClusterControl product. Currently counting BT, Orange, Cisco, CNRS[...]
, Technicolor, AVG, Ping Identity and Paytrail as customers. Severalnines is a private company headquartered in Stockholm, Sweden with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today visit, http://www.severalnines.com/about-us/company. Tags: MySQLmysql galera clustereurofunkclustercontroloracle

Planet MySQL EMEA Webinar-Galera Cluster® Best Practices for DBAs and DevOps Part 2: Taking Full Advantage of Multi-Master Description This webinar will be the second in our series on best practices to follow when using Galera Cluster. In this part, we will discuss important topics related to multi-master setups: practical considerations when using Galera in a multi-master setup; evaluating the characteristics of your database workload; preparing your application for multi-master; detecting and dealing with transaction conflicts. The webinar will conclude with a Q&A session where you can ask any questions you may have about Galera Cluster. Time: 11-12 AM EEST (Eastern Europe Standard Time), 13th of December. Speakers: Philip Stoev, Quality and Release Manager, Codership, and Sakari Keskitalo, COO, Codership. REGISTER TO THE EMEA TIMEZONE WEBINAR

Planet MySQL USA Webinar-Galera Cluster® Best Practices for DBAs and DevOps Part 2: Taking Full Advantage of Multi-Master Description This webinar will be the second in our series on best practices to follow when using Galera Cluster. In this part, we will discuss important topics related to multi-master setups: practical considerations when using Galera in a multi-master setup; evaluating the characteristics of your database workload; preparing your application for multi-master; detecting and dealing with transaction conflicts. The webinar will conclude with a Q&A session where you can ask any questions you may have about Galera Cluster. Time: 9-10 AM PST (Pacific time zone), 13th of December. Speakers: Philip Stoev, Quality and Release Manager, Codership, and Sakari Keskitalo, COO, Codership. REGISTER TO THE USA TIME ZONE WEBINAR

Planet MySQL Using the InnoDB Buffer Pool Pre-Load Feature in MySQL 5.7 In this blog post, I’ll discuss how to use the InnoDB buffer pool pre-load feature in MySQL 5.7. Starting with MySQL 5.6, you can configure MySQL to save the contents of your InnoDB buffer pool and load it on startup. Starting in MySQL 5.7, this is the default behavior. Without any special effort, MySQL saves and restores a portion of the buffer pool in the default configuration. We made a similar feature available in Percona Server 5.5, so the concept has been around for quite a while. Frankly, time has reduced the need for this feature. Five years ago, we would typically store databases on spinning disks. These disks often took quite a long time to warm up with normal database workloads, which could lead to many hours of poor performance after a restart. With the rise of SSDs, warmup happens faster and reduces the penalty from not having data in the buffer pool. Typically, a system reaches 90% of its fully warmed-up performance in 10 minutes or less. But since it takes virtually no effort to use, saving the contents of the InnoDB buffer pool is a great feature to enable by default. This blog post looks into some issues with this feature that might not be totally obvious from its name or documentation. #1 By default, MySQL only saves the 25% most actively accessed pages (by the LRU) in the InnoDB buffer pool (not the whole buffer pool). This is a reasonable choice for many use cases: it saves the most valuable pages, which can then be loaded faster than if you try to load every page in the buffer pool (many of which might not be relevant for the continuing workload). You can change this number by setting the innodb_buffer_pool_dump_pct variable. If you’re using InnoDB essentially as an in-memory database, and want to ensure all data is memory resident and can be accessed without a disk read, set it to 100.
Note that this variable is based on the actual amount of data present in memory, not the buffer pool size. For example, if you have a 100GB buffer pool but it only contains 10GB of data, by default only 25% of 10GB (2.5GB) gets saved. (As the manual explains, the dump takes far less space on disk than that, as only the page identifiers are stored, not full page contents.) #2 MySQL starts and becomes accessible through the network before the buffer pool load on startup is complete. Immediately after startup, a background process fetches buffer pool contents from the disk as quickly as possible, possibly affecting performance. If you have multiple MySQL nodes – like using MySQL Replication or running Percona XtraDB Cluster – you might consider bringing them back for production traffic only after the buffer pool load operation completes. You can monitor the buffer pool load progress by watching the GLOBAL STATUS variable: Buffer pool load is in progress:| Innodb_buffer_pool_load_status          | Loaded 403457/419487 pages         |Buffer pool load is complete:| Innodb_buffer_pool_load_status          | Buffer pool(s) load completed at 161123  9:18:57 |As a side note, it would be great if MySQL would provide a clearer concept of the “state” of a node: being UP versus being READY to serve traffic in an optimal way are often not the same. #3 InnoDB’s buffer pool preload is not very efficient, at least with fast storage. In my test environment, with rather capable NVMe storage, I get a warmup rate of more than 400MB/sec if I run a read-only sysbench workload. InnoDB’s buffer pool preload warmup rate is around 100MB/sec. I would guess the problem is that it doesn’t drive as many parallel IO requests as SSD storage needs to perform optimally. I did not investigate it further. #4 InnoDB buffer pool save/restore only stores the buffer pool contents on a clean shutdown. If the server crashes, MySQL still does a buffer pool preload, but with the content information saved on the last clean shutdown (stored in the ib_buffer_pool file). This might end up wasting time loading data that is not relevant for the current workload. Periodically running the following ensures a fresh set of pages is available for a quick warmup, even if MySQL crashed:SET GLOBAL innodb_buffer_pool_dump_now=ON;This preserves the current list of buffer pool pages. Note that while you (hopefully) do not see your MySQL crash that often, the same issue exists with backups, MySQL slave cloning with Percona XtraBackup, or LVM snapshots. This causes these operations to be less efficient. I hope the observations in this blog help you put this feature to better use!
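Putting the pieces above together: a minimal sketch of how the dump/restore knobs can be exercised on a MySQL 5.7 server (the 75% figure is an arbitrary illustration, not a recommendation):

```sql
-- Save a larger slice of the buffer pool than the 25% default
SET GLOBAL innodb_buffer_pool_dump_pct = 75;

-- Snapshot the current page list right now (e.g. just before taking a
-- backup or cloning a slave), so the next warmup uses fresh page identifiers
SET GLOBAL innodb_buffer_pool_dump_now = ON;

-- After a restart, watch warmup progress before routing production traffic
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';
```

A monitoring script could poll the last statement and only re-enable the node in the load balancer once the status line reads "Buffer pool(s) load completed".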

Planet MySQL Galera Cache (gcache) is finally recoverable on restart This post describes how to recover the Galera Cache (or gcache) on restart. Recently Codership introduced (with Galera 3.19) a very important and long-awaited feature: users can now recover the Galera cache on restart. Need If you gracefully shut down cluster nodes one after another, with some lag time between nodes, then the last node to shut down holds the latest data. The next time you restart the cluster, the last node shut down will be the first one to boot. Any followup nodes that join the cluster after the first node will demand an SST. Why SST, when these nodes already have the data and only a few write-sets are missing? The DONOR node caches missing write-sets in the Galera cache, but on restart this cache is wiped clean and restarted fresh. So the DONOR node doesn’t have a Galera cache to donate missing write-sets. This painful setup made it necessary for users to think and plan before gracefully taking down the cluster. With the introduction of this new feature, the user can retain the Galera cache. How does this help? On restart, the node will revive the galera-cache. This means the node can act as a DONOR and service missing write-sets (facilitating IST, instead of using SST). The option to retain the galera-cache is controlled by an option named gcache.recover=yes/no. The default is NO (the Galera cache is not retained). The user can set this option for all nodes, or for selective nodes, based on disk usage. gcache.recover in action The example below demonstrates how to use this option: Let’s say the user has a three-node cluster (n1, n2, n3), with all nodes in sync. The user gracefully shuts down n2 and n3. n1 is still up and running, and processes some workload, so now n1 has the latest data. n1 is eventually shut down. Now the user decides to restart the cluster. Obviously, the user needs to start n1 first, followed by n2/n3. n1 boots up, forming a new cluster. 
n2 boots up, joins the cluster, finds there are missing write-sets and demands IST, but given that n1 doesn’t have a gcache, it falls back to SST. n2 (JOINER node log):2016-11-18 13:11:06 3277 [Note] WSREP: State transfer required: Group state: 839028c7-ad61-11e6-9055-fe766a1886c3:4680 Local state: 839028c7-ad61-11e6-9055-fe766a1886c3:3893n1 (DONOR node log), gcache.recover=no:2016-11-18 13:11:06 3245 [Note] WSREP: IST request: 839028c7-ad61-11e6-9055-fe766a1886c3:3893-4680|tcp://192.168.1.3:5031 2016-11-18 13:11:06 3245 [Note] WSREP: IST first seqno 3894 not found from cache, falling back to SSTNow let’s re-execute this scenario with gcache.recover=yes. n2 (JOINER node log):2016-11-18 13:24:38 4603 [Note] WSREP: State transfer required: Group state: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:1495 Local state: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:769 .... 2016-11-18 13:24:41 4603 [Note] WSREP: Receiving IST: 726 writesets, seqnos 769-1495 .... 2016-11-18 13:24:49 4603 [Note] WSREP: IST received: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:1495n1 (DONOR node log):2016-11-18 13:24:38 4573 [Note] WSREP: IST request: ee8ef398-ad63-11e6-92ed-d6c0646c9f13:769-1495|tcp://192.168.1.3:5031 2016-11-18 13:24:38 4573 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.You can also validate this by checking the lowest write-set available in the gcache on the DONOR node.mysql> show status like 'wsrep_local_cached_downto'; +---------------------------+-------+ | Variable_name | Value | +---------------------------+-------+ | wsrep_local_cached_downto | 1 | +---------------------------+-------+ 1 row in set (0.00 sec)So as you can see, gcache.recover can restore the cache on restart and help service IST instead of SST. This is a major resource saver for most graceful shutdowns. gcache revive doesn’t work if . . . gcache pages are involved. Gcache pages are still removed on shutdown, and the gcache write-sets up to that point also get cleared. Again, let’s see an example: Let’s assume the same configuration and workflow as mentioned above. We will just change the workload pattern. n1, n2, n3 are in sync and an average-size workload is executed, such that the write-sets fit in the gcache. (seqno=1-x) n2 and n3 are shut down. n1 continues to operate and executes some average-size workload followed by a huge transaction that results in the creation of a gcache page. (1-x-a-b-c-h) [h represents the transaction seqno] Now n1 is shut down. During shutdown, gcache pages are purged (irrespective of the keep_page_sizes setting). The purge ensures that all the write-sets that have seqnos smaller than the gcache-page-residing write-set are purged, too. This effectively means everything (1-h) is removed, including (a,b,c). On restart, even though n1 can revive the gcache, it can’t revive anything, as all the write-sets are purged. When n2 boots up, it requests IST, but n1 can’t service the missing write-sets (a,b,c,h). This causes SST to take place. Summing it up Needless to say, gcache.recover is a much-needed feature, given it saves SST pain. (Thanks, Codership.) It would be good to see if the feature can be optimized to work with gcache pages. And yes, Percona XtraDB Cluster inherits this feature in its upcoming release.
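For reference, gcache.recover is a Galera provider option, so it is set through wsrep_provider_options rather than as a standalone server variable. A minimal my.cnf sketch, assuming Galera 3.19 or later (the gcache.size value here is purely illustrative; size it to cover the write-set volume expected during a node outage):

```ini
# my.cnf fragment -- retain the Galera cache across a graceful restart,
# so this node can serve IST to rejoining nodes instead of forcing SST
[mysqld]
wsrep_provider_options="gcache.recover=yes;gcache.size=1G"
```

After restarting with this setting, the wsrep_local_cached_downto status variable shown above indicates how far back the revived cache can serve IST.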

Planet MySQL Pronouncing Database Terms It is the business of educated people to speak so that no-one may be able to tell in what county their childhood was passed. -- A. Burrell, A Handbook for Teachers in Public Elementary School, 1891 The terms that reveal where a person (mis)spent a DBMS-related childhood are "char", "data", "GIF", "gigabyte", "GUI", "JSON", "query", "schema", "tuple", "_", "`", and "«". CHAR (1) Like "Care" because it's short for "Character" (so hard C and most folks say "Character" that way)? (2) Like "Car" because it's short for "Character" (so hard C and a few folks in the British Isles say it that way and perhaps all other English words ending in consonant + "ar" are pronounced that way)? (3) Like "Char" (the English word for a type of trout)? C/C++ programmers say (3), for example Bjarne Stroustrup of C++ fame says that's illogical but usual. However, people who have not come to SQL from another programming language may be more likely to go with (2), leading one online voter to exclaim "I've known a lot of people who say "car" though. (Generally SQL-y people; is this what they teach in DBA classes?)" and Tom Kyte of Oracle fame reportedly says "var-car" . DATA The Oxford English Dictionary (OED) shows 4 (four!) variations: "Brit. /ˈdeɪtə/, /ˈdɑːtə/, U.S. /ˈdædə/, /ˈdeɪdə/". It's only the first syllable that matters -- DAY or DA? I haven't seen the Longman Pronunciation Dictionary, but a blog post says the results of Longman's preference poll were: "BrE: deɪtə 92% ˈdɑːtə 6% ˈdætə 2% AmE: ˈdeɪțə 64%ˈdæțə 35% ˈdɑːțə 1%" (notice it's ț not t for our American friends). By the way OED says in a computing context it's "as a mass noun" so I guess "data is" is okay. GIF It's "jif", says its creator. GIGABYTE That letter at the start is a hard G; The "Jigabyte" pronunciation is unknown to Merriam-Webster, Cambridge, and Oxford dictionaries. GUI No question it's "gooey", for all the dictionaries I checked. So pronounce our product as "osselot-goey". 
GUID The author of "Essential COM" says The exact pronunciation of GUID is a subject of heated debate among COM developers. Although the COM specification says that GUID rhymes with fluid, the author [Don Box] believes that the COM specification is simply incorrect, citing the word languid as setting the precedent. The COM specification is a standard and therefore cannot be incorrect, but I can't find it, and I like setting-a-precedent games, so let's use the exact word Guid, eh? It appears in Hugh MacDiarmid's masterpiece "A Drunk Man Looks At The Thistle" But there are flegsome deeps Where the soul o'Scotland sleeps That I to bottom need To wauk Guid kens what deid .. which proves that Guid is a one-syllable word, though doubtless MacDiarmid pronounced it "Gweed". JSON Doug Crockford of Yahoo fame, seen on Youtube, says: So I discovered JAYsun. Java Script Object Notation. There's a lot of argument about how you pronounce that. I strictly don't care. I think probably the correct pronunciation is [switching to French] "je sens". The argument is mostly between people who say JAYsun and people who say JaySAWN. It's controversial, and in our non-JSON environment it's a foreign word, so spelling it out J S O N is safe and okay. QUERY In the 1600s the spelling was "quaery", so it must have rhymed with "very", and it still does, for some Americans. But the OED says that both American and British speakers say "QUEERie" nowadays. SCHEMA It's "Skema". The "Shema" pronunciation is unknown to Merriam-Webster, Cambridge, and Oxford dictionaries. SQL See the earlier post "How to pronounce SQL" which concluded: In the end, then, it's "when in Rome do as the Romans do". In Microsoft or Oracle contexts one should, like Mr Ellison, respect Microsoft's or Oracle's way of speaking. But here in open-source-DBMS-land the preference is to follow the standard. TUPLE See the earlier post "Tuples". It's "Tuhple". _ According to Swan's "Practical English Usage" the _ (Unicode code point 005F) character is more often called underline by Britons, more often called underscore by Americans. (The SQL-standard term is underscore.) (The Unicode term is LOW LINE; SPACING UNDERSCORE was the old Unicode-version-1.0 term.) ` ` This is a clue for telling if people have MySQL backgrounds -- they'll pronounce the ` (Unicode code point 0060) symbol as "backtick". Of course it also is found in other programming contexts nowadays, but there are lots of choices in the Jargon File: Common: backquote; left quote; left single quote; open quote; ; grave. Rare: Backprime; [backspark]; unapostrophe; birk; blugle; back tick; back glitch; push; ; quasiquote. By the way The Jargon File is a good source for such whimsical alternatives of ASCII names. « » You might be fooled by an Adobe error, as I was, into thinking that these French-quote-mark thingies are pronounced GEELmoes. Wrong. They are GEELmays. (The Unicode term is left-point or right-point double angle quotation marks.) This matter matters because, as Professor Higgins said, "The French don't care what they do actually, as long as they pronounce it properly." Meanwhile ... Enhancements made to the source code for the next version of ocelotgui, Ocelot's Graphical User Interface for MySQL and MariaDB, are: error messages are optionally in French, and grid output is optionally in HTML. As always, the description of the current version is on ocelot.ca and the downloadable source and releases are on github.

Planet MySQL Debian and MariaDB Server GNU/Linux distributions matter, and Debian is one of the most popular ones out there in terms of user base. It’s an interesting time as MariaDB Server becomes more divergent compared to upstream MySQL, and people go about choosing default providers of the database. MariaDB Server’s original goal was to be a drop-in replacement. In fact, this is how it’s described (“It is an enhanced, drop-in replacement for MySQL”). We all know that it’s becoming increasingly hard for that line to be used these days. Anyhow, in March 2016, Debian’s release team made the decision that going forward, MariaDB Server is what people using Debian Stretch get when they ask for MySQL (i.e. MariaDB Server is the default provider for an application that requires the use of port 3306 and provides a MySQL-like protocol). All this has brought some interesting bug reports and discussions, so here’s a collection of links that interest me (with decisions that will affect Debian users going forward). Connectors MySQL ODBC in Stretch – do follow the thread. [debian-mysql] final decision about MySQL r-deps needed / cleaning up the MySQL mess – yes, the MySQL C++ connector is not the same as the MariaDB Connector/C. And let’s not forget the things that depend on the C++ connector, i.e. libreoffice-mysql-connector. Rene Engelhard started this excellent thread with questions that could do with answers. MariaDB Server Don’t include in stretch – bug#837615 – this is about how MariaDB Server 10.0 (note the version – this matters) should be included, but MySQL 5.6 shouldn’t be. MariaDB 10.1? – note that Otto Kekäläinen, CEO of the MariaDB Foundation, says the plan is to skip MariaDB Server 10.1 and go straight to MariaDB Server 10.2. As of this writing, MariaDB Server 10.2 is in its first beta, released 27 Sep 2016, so are we expecting a few more betas before the release candidate? 
History shows there were four betas for 10.1 and one release candidate, while there were three betas and two release candidates for 10.0. There is no response here as to what is gained from skipping MariaDB Server 10.1, but one can guess that this has to do with support cycles. default-mysql-client forces removal of mysql-server* and mysql-client* – bug#842011 – the bug reporter is a bit hostile towards the package team, but the gist is that “mariadb is NOT a drop-in replacement for mysql.” Users are bound to realise this once Debian Stretch gets more mainstream use. [debian-mysql] Bug#840855: mysql-server: MySQL 5.7? – questioning what happens to MySQL 5.7, and this is really a call to action – if you disagree, email the security and release teams now, not after Stretch is released! Quoting Clint Byrum, “The release and security teams have decided that MySQL will live only in unstable for stretch due to the perceived complications with tracking security patches in MySQL.” [debian-mysql] About packages that depend on mysql-* / mariadb / virtual-mysql-* – wherein we find the API-incompatible libmysqlclient, naming conventions, and more.
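The "virtual-mysql-*" / "default-mysql-*" provider scheme behind these bug reports can be sketched as a toy resolver. This is only an illustration of how a virtual package name maps to real packages; the package names and the preference rule below are simplified assumptions, not the actual contents of the Debian archive.

```python
# Toy model of Debian's virtual-package scheme for MySQL providers,
# loosely following the "virtual-mysql-*" / "default-mysql-*" naming
# discussed above. Package names here are simplified assumptions.

# Real packages mapped to the virtual names they Provide:
PROVIDES = {
    "mariadb-server-10.1": {"virtual-mysql-server", "default-mysql-server"},
    "mysql-server-5.7": {"virtual-mysql-server"},
}

def providers(virtual_name):
    """All real packages that satisfy a given virtual package name."""
    return sorted(pkg for pkg, names in PROVIDES.items() if virtual_name in names)

def resolve_default(virtual_name, preferred="mariadb-server-10.1"):
    """Pick one provider; Stretch's release team chose MariaDB as the default."""
    candidates = providers(virtual_name)
    if preferred in candidates:
        return preferred
    return candidates[0] if candidates else None

print(resolve_default("virtual-mysql-server"))  # mariadb-server-10.1
```

In this model both MySQL and MariaDB satisfy `virtual-mysql-server`, but only the MariaDB package provides `default-mysql-server`, which is how a distribution-level default gets expressed without breaking packages that merely need *a* MySQL-like server.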

Planet MySQL MySQL Connector/Python 2.2.2 m3 Development Release has been released MySQL Connector/Python 2.2.2 M3 is the third development release of the MySQL Connector Python 2.2 series. This series adds support for the new X DevAPI. The X DevAPI enables application developers to write code that combines the strengths of the relational and document models using a modern, NoSQL-like syntax that does not assume previous experience writing traditional SQL. To learn more about how to write applications using the X DevAPI, see http://dev.mysql.com/doc/x-devapi-userguide/en/index.html. For more information about how the X DevAPI is implemented in MySQL Connector/Python, and its usage, see http://dev.mysql.com/doc/dev/connector-python. Please note that the X DevAPI requires at least MySQL Server version 5.7.12 or higher with the X Plugin enabled. For general documentation about how to get started using MySQL as a document store, see http://dev.mysql.com/doc/refman/5.7/en/document-store.html. To download MySQL Connector/Python 2.2.2 M3, see the “Development Releases” tab at http://dev.mysql.com/downloads/connector/python/ We are working on the github upload, it should be available soon. Enjoy! Changes in MySQL Connector/Python 2.2.2 (2016-12-01, Milestone 3) * Functionality Added or Changed * Bugs Fixed Functionality Added or Changed * If the MySQL server is configured to support secure connections, Connector/Python now attempts to establish a secure (encrypted) connection by default, falling back to an unencrypted connection otherwise. This is behavior similar to the –ssl-mode=PREFERRED option supported by MySQL client programs. The following TLS/SSL options have been implemented for the mysqlx URI schema. All require Python 2.7.9 or higher. + ssl-enable: This option enforces SSL connections. If given, a connection attempt must be able to establish a secure connection or the attempt fails. + ssl-ca: This option is used to verify the server certificate. 
+ ssl-cert, ssl-key: These options are used to pass the client certificate and key, but the server currently does not validate the client using these. The ssl-enable parameter can be specified in a parameter dictionary or URL, like this: mysqlx.get_session({"user": "root", "host": "localhost", "port": 33060, "password": "pass", "ssl-enable": True}) Or: mysqlx.get_session("mysqlx://root:pass@localhost?ssl-enable") The other parameters are used similarly. In a URL, path name values should be given within parentheses; for example, ssl-cert=(path_name). (Bug #24954646) * There is now a standard API to create a table: Schema objects have a create_table function. It throws an error if the table exists. * For any method that takes a value list of parameters for its argument, there is now more flexibility in how the parameters can be specified: either as a value list or as individual parameters. For example, these method calls are the same: Collection.add([{"a": 27}, {"a": 28}]) Collection.add({"a": 27}, {"a": 28}) * For Schema objects, get_view, create_view, alter_view, and drop_view functions were added to support retrieval, create, alter, and drop operations on View objects. * Unix domain socket files are now supported as a connection transport. The socket file can be specified in a parameter dictionary or URL, like this: mysqlx.get_session({"user": "root", "password": "pass", "socket": "/path/to/socket"}) Or: mysqlx.get_session("mysqlx://user:pass@(/path/to/sock)/schema") mysqlx.get_session("mysqlx://user:pass@/path%2Fto%2Fsock/schema") mysqlx.get_session("mysqlx://user:pass@.%2Fpath%2Fto%2Fsock/schema") mysqlx.get_session("mysqlx://user:pass@..%2Fpath%2Fto%2Fsock/schema") Bugs Fixed * For a user created with REQUIRE SSL, establishing an SSL connection by specifying[...]
--ssl-key but not --ssl-ca or --ssl-cert fails for standard MySQL client programs. The same connection configuration was (improperly) permitted in Connector/Python. (Bug #24953032) * Connection failures due to an improper SSL CA resulted in an uninformative error message. (Bug #24948054) * Using a schema object to alter a view failed if the view selected from a non-INFORMATION_SCHEMA table and it was altered to select from an INFORMATION_SCHEMA table. (Bug #24947078) * schema.create_collection() with an empty collection name threw an improper error. (Bug #24520850) Documentation -------------- Online: http://dev.mysql.com/doc/connector-python/en/index.html The source distribution includes the manual in various formats under the docs/ folder. Reporting Bugs -------------- We welcome and appreciate your feedback and bug reports: http://bugs.mysql.com/ On Behalf of the MySQL/ORACLE RE Team, Balasubramanian Kandasamy
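Two of the items in these release notes — the value-list-or-individual-parameters calling convention, and percent-encoded socket paths in mysqlx URIs — can be sketched with plain standard-library Python. `add_docs` and `socket_uri` below are hypothetical stand-ins for illustration, not the real Connector/Python API:

```python
from urllib.parse import quote

# Hypothetical stand-in for Collection.add(): accepts either one list of
# documents or the documents as individual positional arguments, as the
# changelog entry above describes. Not the real Connector/Python code.
def add_docs(*docs):
    if len(docs) == 1 and isinstance(docs[0], list):
        docs = docs[0]            # Collection.add([{...}, {...}]) form
    return list(docs)             # Collection.add({...}, {...}) form

# Building the percent-encoded Unix-socket URI form shown above: the
# slashes in the socket path must be escaped (or the whole path wrapped
# in parentheses) so they are not taken as URI path separators.
def socket_uri(user, password, socket_path, schema):
    return "mysqlx://{}:{}@{}/{}".format(
        user, password, quote(socket_path, safe=""), schema)
```

Both `add_docs([{"a": 27}, {"a": 28}])` and `add_docs({"a": 27}, {"a": 28})` return the same two-document list, and `socket_uri("user", "pass", "/path/to/sock", "schema")` yields `mysqlx://user:pass@%2Fpath%2Fto%2Fsock/schema`, matching the escaped URI forms in the notes.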

Planet MySQL Database Daily Ops Series: GTID Replication and Binary Logs Purge This blog continues the ongoing series on daily operations and GTID replication. In this blog, I’m going to investigate why the error below has been appearing in a special environment I’ve been working with over the last few days: Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.' The error provides the right message, and explains what is going on. But sometimes it can be a bit tricky to solve this issue: you need additional information, discovered after some testing and reading. We try to keep Managed Services scripted, in the sense that our advice and best practices are repeatable and consistent. However, some additional features and practices can be added depending on the customer’s situation. Some time ago, one of our customer’s database servers presented the above message. At that point, we could see the binary log files in a compressed form on the master (gzipped). Of course, MySQL can’t identify a compressed file with a .gz extension as a binary log. We uncompressed the file, but replication presented the same problem – even after uncompressing the file and making sure the UUID of the current master and the TRX_ID were there. Obviously, I needed to go and investigate the problem to see what was going on. After some reading, I re-read the below: When the server starts, the global value of gtid_purged, which was previously called gtid_lost, is initialized to the set of GTIDs contained by the Previous_gtid_log_event of the oldest binary log. When a binary log is purged, gtid_purged is re-read from the binary log that has now become the oldest one. 
=> https://dev.mysql.com/doc/refman/5.6/en/replication-options-gtids.html#sysvar_gtid_purged That makes me think: if something is compressing binlogs on the master without purging them as expected by the GTID mechanism, it’s not going to be able to re-read existing GTIDs on disk. When the slave replication threads restarts, or the DBA issues commands like reset slave and reset master (to clean out the increased GTID sets on Executed_Gtid_Set from the SHOW SLAVE STATUS command, for example), this error can occur. But if I compress the file: Will the slave get lost and not find all the needed GTIDs on the master after a reset slave/reset master? If I purge the logs correctly, using PURGE BINARY LOGS, will the slave be OK when restarting replication threads? Test 1: Compressing the oldest binary log file on master, restarting slave threads I would like to test this very methodically. We’ll create one GTID per binary log, and then I will compress the oldest binary log file in order to make it unavailable for the slaves. I’m working with three virtual machines, one master and two slaves. On the second slave, I’m going to run the following sequence: stop slave, reset slave, reset master, start slave, and then, check the results. Let’s see what happens. On master (tool01):tool01 [(none)]:> show master logs; +-------------------+-----------+ | Log_name          | File_size | +-------------------+-----------+ | mysqld-bin.000001 |       341 | | mysqld-bin.000002 |       381 | | mysqld-bin.000003 |       333 | +-------------------+-----------+ 3 rows in set (0.00 sec) tool01 [(none)]:> show binlog events in 'mysqld-bin.000001'; +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              [...]
| +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | mysqld-bin.000001 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             | | mysqld-bin.000001 | 120 | Previous_gtids |         1 |         151 |                                                                   | | mysqld-bin.000001 | 151 | Gtid           |         1 |         199 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:1' | | mysqld-bin.000001 | 199 | Query          |         1 |         293 | create database wb01                                              | | mysqld-bin.000001 | 293 | Rotate         |         1 |         341 | mysqld-bin.000002;pos=4                                           | +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ 5 rows in set (0.00 sec) tool01 [(none)]:> show binlog events in 'mysqld-bin.000002'; +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              | +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | mysqld-bin.000002 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             | | mysqld-bin.000002 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1                            | | mysqld-bin.000002 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:2' | | mysqld-bin.000002 | 239 | Query          |         1 |         333 | create database wb02              
                                | | mysqld-bin.000002 | 333 | Rotate         |         1 |         381 | mysqld-bin.000003;pos=4                                           | +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ 5 rows in set (0.00 sec) tool01 [(none)]:> show binlog events in 'mysqld-bin.000003'; +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | Log_name          | Pos | Event_type     | Server_id | End_log_pos | Info                                                              | +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ | mysqld-bin.000003 |   4 | Format_desc    |         1 |         120 | Server ver: 5.6.32-log, Binlog ver: 4                             | | mysqld-bin.000003 | 120 | Previous_gtids |         1 |         191 | 4fbe2d57-5843-11e6-9268-0800274fb806:1-2                          | | mysqld-bin.000003 | 191 | Gtid           |         1 |         239 | SET @@SESSION.GTID_NEXT= '4fbe2d57-5843-11e6-9268-0800274fb806:3' | | mysqld-bin.000003 | 239 | Query          |         1 |         333 | create database wb03                                              | +-------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+ 4 rows in set (0.00 sec)Here we see that each existing binary log file has just one transaction. That will make it easier to compress the oldest binary log, and then disappear with part of the existing GTIDs. When the slave connects to a master, it will first send all the Executed_Gtid_Set, and then the master sends all the missing IDs to the slave. As Stephane Combaudon said, we will fo[...]
rce it to happen! Slave database servers are both currently in the same position:tool02 [(none)]:> show slave statusG *************************** 1. row ***************************                Slave_IO_State: Waiting for master to send event                   Master_Host: 192.168.0.10                   Master_User: repl                   Master_Port: 3306                 Connect_Retry: 60               Master_Log_File: mysqld-bin.000003           Read_Master_Log_Pos: 333                Relay_Log_File: mysqld-relay-bin.000006                 Relay_Log_Pos: 545         Relay_Master_Log_File: mysqld-bin.000003              Slave_IO_Running: Yes             Slave_SQL_Running: Yes             ...            Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3             Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3              tool03 [(none)]:> show slave statusG *************************** 1. row ***************************                Slave_IO_State: Waiting for master to send event                   Master_Host: 192.168.0.10                   Master_User: repl                   Master_Port: 3306                 Connect_Retry: 60               Master_Log_File: mysqld-bin.000003           Read_Master_Log_Pos: 333                Relay_Log_File: mysqld-relay-bin.000008                 Relay_Log_Pos: 451         Relay_Master_Log_File: mysqld-bin.000003              Slave_IO_Running: Yes             Slave_SQL_Running: Yes ...            Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3             Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3Now, we’ll compress the oldest binary log on master:[root@tool01 mysql]# ls -lh | grep mysqld-bin. 
-rw-rw---- 1 mysql mysql  262 Nov 11 13:55 mysqld-bin.000001.gz #: this is the file containing 4fbe2d57-5843-11e6-9268-0800274fb806:1 -rw-rw---- 1 mysql mysql  381 Nov 11 13:55 mysqld-bin.000002 -rw-rw---- 1 mysql mysql  333 Nov 11 13:55 mysqld-bin.000003 -rw-rw---- 1 mysql mysql   60 Nov 11 13:55 mysqld-bin.indexOn tool03, which is the database server that will be used, we will execute the replication reload:tool03 [(none)]:> stop slave; reset slave; reset master; start slave; Query OK, 0 rows affected (0.01 sec) Query OK, 0 rows affected (0.03 sec) Query OK, 0 rows affected (0.00 sec) Query OK, 0 rows affected (0.02 sec) tool03 [(none)]:> show slave statusG *************************** 1. row ***************************                Slave_IO_State:                   Master_Host: 192.168.0.10                   Master_User: repl                   Master_Port: 3306                 Connect_Retry: 60               Master_Log_File:           Read_Master_Log_Pos: 4                Relay_Log_File: mysqld-relay-bin.000002                 Relay_Log_Pos: 4         Relay_Master_Log_File:              Slave_IO_Running: No             Slave_SQL_Running: Yes               Replicate_Do_DB:           Replicate_Ignore_DB:            Replicate_Do_Table:        Replicate_Ignore_Table:       Replicate_Wild_Do_Table:   Replicate_Wild_Ignore_Table:                    Last_Errno: 0                    Last_Error:                  Skip_Counter: 0           Exec_Master_Log_Pos: 0               Relay_Log_Space: 151               Until_Condition: None                Until_Log_File:                 Until_Log_Pos: 0            Master_SSL_Allowed: No            Master_SSL_CA_File:            Master_SSL_CA_Path:               Master_SSL_Cert:             Master_SSL_Cipher:                Master_SSL_Key:         Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No                 Last_IO_Errno: 1236                 Last_IO_Error: Got fatal error 1236 from master when reading data from 
binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'                Last_SQL_E[...]
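The behaviour exercised by this test can be condensed into a toy model of gtid_purged, using bare integers in place of the full uuid:seqno GTID form. This only illustrates the failure mode described above; it is not server code.

```python
# Toy model of the gtid_purged behaviour quoted from the manual above,
# using bare integers instead of full "uuid:seqno" GTIDs.

def gtid_purged(binlogs):
    """gtid_purged = Previous_gtids of the oldest *readable* binary log."""
    return set(binlogs[0]["previous_gtids"]) if binlogs else set()

def io_thread_connect(binlogs, slave_executed):
    """Error 1236: the master purged (or lost) logs containing GTIDs the
    slave still needs, i.e. gtid_purged is not covered by the slave's
    Executed_Gtid_Set."""
    if not gtid_purged(binlogs) <= set(slave_executed):
        return "ERROR 1236: master has purged binary logs the slave requires"
    return "OK"

binlogs = [
    {"name": "mysqld-bin.000001", "previous_gtids": set(), "gtids": {1}},
    {"name": "mysqld-bin.000002", "previous_gtids": {1}, "gtids": {2}},
    {"name": "mysqld-bin.000003", "previous_gtids": {1, 2}, "gtids": {3}},
]

# With all logs readable, even a freshly reset slave can reconnect,
# because gtid_purged is still empty:
assert io_thread_connect(binlogs, set()) == "OK"

# Gzipping mysqld-bin.000001 makes it unreadable, exactly like purging
# it: gtid_purged becomes {1}, and after RESET SLAVE / RESET MASTER the
# slave's executed set is empty, so the IO thread fails with error 1236.
remaining = binlogs[1:]
assert gtid_purged(remaining) == {1}
assert io_thread_connect(remaining, set()).startswith("ERROR 1236")
```

The model also shows why a slave that never reset still works after the compression: its Executed_Gtid_Set already covers GTID 1, so the containment check passes.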

rce it to happen! Slave database servers are both currently in the same position:tool02 [(none)]:> show slave statusG *************************** 1. row ***************************                Slave_IO_State: Waiting for master to send event                   Master_Host: 192.168.0.10                   Master_User: repl                   Master_Port: 3306                 Connect_Retry: 60               Master_Log_File: mysqld-bin.000003           Read_Master_Log_Pos: 333                Relay_Log_File: mysqld-relay-bin.000006                 Relay_Log_Pos: 545         Relay_Master_Log_File: mysqld-bin.000003              Slave_IO_Running: Yes             Slave_SQL_Running: Yes             ...            Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3             Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3              tool03 [(none)]:> show slave statusG *************************** 1. row ***************************                Slave_IO_State: Waiting for master to send event                   Master_Host: 192.168.0.10                   Master_User: repl                   Master_Port: 3306                 Connect_Retry: 60               Master_Log_File: mysqld-bin.000003           Read_Master_Log_Pos: 333                Relay_Log_File: mysqld-relay-bin.000008                 Relay_Log_Pos: 451         Relay_Master_Log_File: mysqld-bin.000003              Slave_IO_Running: Yes             Slave_SQL_Running: Yes ...            Retrieved_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3             Executed_Gtid_Set: 4fbe2d57-5843-11e6-9268-0800274fb806:1-3Now, we’ll compress the oldest binary log on master:[root@tool01 mysql]# ls -lh | grep mysqld-bin. 
-rw-rw---- 1 mysql mysql  262 Nov 11 13:55 mysqld-bin.000001.gz  #: this is the file containing 4fbe2d57-5843-11e6-9268-0800274fb806:1
-rw-rw---- 1 mysql mysql  381 Nov 11 13:55 mysqld-bin.000002
-rw-rw---- 1 mysql mysql  333 Nov 11 13:55 mysqld-bin.000003
-rw-rw---- 1 mysql mysql   60 Nov 11 13:55 mysqld-bin.index

On tool03, which is the database server that will be used, we will execute the replication reload:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.01 sec)
Query OK, 0 rows affected (0.03 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)

tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161111 14:47:13
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 1
1 row in set (0.00 sec)

Bingo! We broke the replication streaming on the slave. Now we know that the missing GTID on the master was caused by the compressed file, and could not be passed along to the connecting slave during their negotiation. Additionally, @@GTID_PURGED was not reloaded as described in the online manual. The test is done, and we have confirmed the theory (if you have additional comments, enter them at the end of the blog).

Test 2: Purge the oldest file on master and reload replication on slave

Let’s make it as straightforward as possible. The purge can be done manually using the PURGE BINARY LOGS command, which does it the proper way: the binary log index file is treated as part of the purge operation as well (it is edited to remove the file name entry together with the log file on disk).
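To make the index-file point concrete, here is a rough Python sketch (a model for illustration, not MySQL's actual implementation) of the two things PURGE BINARY LOGS TO has to keep consistent: the log files on disk and their entries in mysqld-bin.index. A manual rm or gzip, as in Test 1, touches only the files and leaves the index (and @@GTID_PURGED) out of sync.

```python
import os

def purge_binary_logs_to(datadir, target, index="mysqld-bin.index"):
    """Rough model of PURGE BINARY LOGS TO 'target': remove every log file
    older than target AND drop its line from the index file, keeping the
    two in sync (a manual rm/gzip does neither for the index)."""
    idx_path = os.path.join(datadir, index)
    with open(idx_path) as f:
        logs = [line.strip() for line in f if line.strip()]
    keep = [name for name in logs if os.path.basename(name) >= target]
    drop = [name for name in logs if os.path.basename(name) < target]
    for name in drop:
        # Delete the log file itself...
        os.remove(os.path.join(datadir, os.path.basename(name)))
    # ...and rewrite the index without the purged entries.
    with open(idx_path, "w") as f:
        f.writelines(name + "\n" for name in keep)
    return drop
```

Lexicographic comparison is enough here because the server zero-pads the sequence number (mysqld-bin.000001 sorts before mysqld-bin.000002).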
I’m going to execute the same as before, but include purging the file manually with the mentioned command.

tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       341 |
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
3 rows in set (0.00 sec)

tool01 [(none)]:> purge binary logs to 'mysqld-bin.000002';
Query OK, 0 rows affected (0.01 sec)

tool01 [(none)]:> show master logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000002 |       381 |
| mysqld-bin.000003 |       333 |
+-------------------+-----------+
2 rows in set (0.00 sec)

Now, we’ll execute the commands to check how it goes:

tool03 [(none)]:> stop slave; reset slave; reset master; start slave;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)

tool03 [(none)]:> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State:
                  Master_Host: 192.168.0.10
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File:
          Read_Master_Log_Pos: 4
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 4
        Relay_Master_Log_File:
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 0
              Relay_Log_Space: 151
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 4fbe2d57-5843-11e6-9268-0800274fb806
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp: 161111 16:35:02
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 1
1 row in set (0.00 sec)

The GTID on the purged file is needed by the slave. In both cases, we can set @@GTID_PURGED as below, with the transaction that we know was purged, and move forward with replication:

tool03 [(none)]:> stop slave; set global gtid_purged='4fbe2d57-5843-11e6-9268-0800274fb806:1';
Query OK, 0 rows affected, 1 warning (0.00 sec)
Query OK, 0 rows affected (0.01 sec)

tool03 [(none)]:> start slave;
Query OK, 0 rows affected (0.01 sec)

The above adjusts @@GTID_PURGED so that the slave requests only the existing GTIDs, setting it to the oldest existing GTID minus one so that the slave starts replication from the oldest existing GTID. In our scenario, the replica restarts replication from 4fbe2d57-5843-11e6-9268-0800274fb806:2, which lives in binary log file mysqld-bin.000002. Replication is fixed, as its threads can restart processing the data streaming coming from the master. You will need to execute additional checksum and sync steps for the set of transactions that was skipped when setting a new value for @@GTID_PURGED. If replication continues to break after restarting, I advise you to rebuild the slave (possibly the subject of a future blog).
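The 1236 negotiation failure above can be modeled as plain set logic. Here is a simplified, single-UUID sketch (not server code; real GTID sets can span many UUIDs): with MASTER_AUTO_POSITION = 1 the master checks that its own gtid_purged set is contained in the GTID set the slave reports as executed. After RESET SLAVE / RESET MASTER the slave reports an empty set, so any purged transaction makes the master refuse.

```python
def parse_gtid_set(s):
    """Parse a single-UUID GTID set like 'uuid:1-3:5' into a set of
    (uuid, sequence_number) pairs; '' is the empty set."""
    txns = set()
    if not s:
        return txns
    uuid, *ranges = s.split(":")
    for r in ranges:
        lo, _, hi = r.partition("-")
        txns.update((uuid, n) for n in range(int(lo), int(hi or lo) + 1))
    return txns

def master_accepts(slave_executed, master_purged):
    """AUTO_POSITION handshake: the master must still hold every
    transaction the slave is missing, i.e. purged must be a subset
    of the slave's executed set."""
    return parse_gtid_set(master_purged) <= parse_gtid_set(slave_executed)

UUID = "4fbe2d57-5843-11e6-9268-0800274fb806"
print(master_accepts("", UUID + ":1"))           # empty slave set -> error 1236
print(master_accepts(UUID + ":1", UUID + ":1"))  # gtid_purged set -> resumes
```

Setting gtid_purged='…:1' on tool03 is exactly the second case: the slave now counts transaction 1 as already applied, the check passes, and streaming resumes from :2.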
Good explanations of this can be found in the bugs below, reported by the Facebook guys and by Laurynas Biveinis, the Percona Server Lead (who clarified the issue): MySQL Bugs: #72635: Data inconsistencies when master has truncated binary log with GTID after crash; MySQL Bugs: #73032: Setting gtid_purged may break auto_position and thus slaves.

Conclusion

Be careful when purging binary logs or manipulating them manually, because @@GTID_PURGED needs to be updated automatically when binary logs are purged. That seems to happen only when expire_logs_days is set to purge binary logs. Yet you need to be careful when trusting this variable, because it does not consider fractions of a day; depending on the number of writes on a database server, disks can fill up in minutes. This blog showed that even housekeeping scripts and the PURGE BINARY LOGS command were able to make it happen.
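For reference, the server-side retention the conclusion mentions is a single my.cnf setting (7 is an arbitrary example value here, and as noted above the unit is whole days only):

```ini
[mysqld]
# Purge binary logs older than N days automatically (integer days only;
# fractions of a day are not supported, so fast-growing binlogs can
# still fill a disk between purges).
expire_logs_days = 7
```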

Planet MySQL MySQL 8.0: Improving the Test Framework to avoid skipping tests In MySQL 8.0 we have improved the MySQL Test Framework (MTR) by adding a new no-skip option that avoids skipping tests in regression test runs. Before describing this feature in more detail, let me start with an introduction. Introduction In MTR, we have an existing command called skip.…

Planet MySQL MySQL 8.0: MTR Configurations to Be Set to Server Defaults Where Possible MySQL Test Run, or MTR for short, is a MySQL test program. It was developed to ensure that the MySQL server operates as expected, whether in terms of testing the functionality of new features or the integrity of the old.…

Planet MySQL Make MySQL 8.0 Better Through Better Benchmarking This blog post discusses how better MySQL 8.0 benchmarks can improve MySQL in general. Like many in the MySQL community, I’m very excited about what MySQL 8.0 offers. There are a lot of great features and architecture improvements. Also, like many in the MySQL community, I would like to see MySQL 8.0 perform better. Better performance is what we always want (and expect) from new database software releases. Rarely do performance improvements happen by accident – they require running benchmarks, finding bottlenecks and eliminating them. This is the area where I think things could use improvement. If you come to the MySQL Keynote at Oracle OpenWorld, or if you go to the MySQL Benchmarks Page, you find a very limited set of benchmarks. They mostly focus on sysbench performance, with large numbers of connections and large numbers of cores. I’m not convinced this effort is the best use of our time these days. Don’t get me wrong: as one of the original designers of sysbench, I think it is a great and simple tool that helps spot many bottlenecks. I still use it to find performance issues. But it is only one tool, and it by no means provides full coverage of real-world MySQL workloads. I agree with Mark Callaghan (see the discussion here): we need to run more benchmarks using a wider set of circumstances, to ensure there are no regressions in new releases. This will help move MySQL performance forward for real users.

Here are some specific ideas on how I think we could benchmark MySQL 8.0 better:

Focus on production-recommended settings. Way too often we see benchmarks run with the doublewrite buffer disabled, InnoDB checksums disabled and no binary log (like in this benchmark). While they produce some excitingly high numbers, they have little practical value for real workloads. At the very least I would very much like to see separate numbers for the “street legal car,” versus one designed to set a record on the salt flats.
Go beyond sysbench. Sysbench focuses on PK-only access to very simple tables, and does not even do JOINs as part of its workload. I would like to see more benchmarks that have tables with many indexes, using secondary key lookups and joins, involving rows with many fields, and medium and large blobs, which are common bottlenecks. We also need more database features covered. Are foreign keys or triggers getting faster or slower? What about stored procedure execution performance? I would love to see these covered. Mark Callaghan suggests LinkBench, which I think is a fine benchmark to add, but it shouldn’t be the only one.

Do more with sysbench. Sysbench could get more done and cover more important workloads. Workloads with data fitting in memory and not fitting in memory should be shown separately. Testing performance with large numbers of tables is also very important – many MySQL installations for SaaS applications run tens of thousands of tables (sometimes going into the millions). I would also suggest running more injection benchmarks with sysbench, as they are more representative of the real world.

Look at latency, not just throughput. The benchmark results we commonly see are focused on throughput over a long period of time, without looking at latency and how performance changes over time. Stalls and performance dips are well known in the MySQL space – especially the famous InnoDB checkpointing woes (though this issue has gotten a lot better). There are other cases and circumstances where stalls and slowdowns can happen.

Measure resource consumption. I very much like how Mark Callaghan shows the CPU usage and IO usage per transaction/operation, so we can get a better idea of efficiency.

Concurrency. Recently, the focus has been on very hi[...]

gh concurrency in terms of connections and active connections, typically on very big iron (using as many as 72 cores). And as much as this is important to “future-proofing” MySQL as we get more and more cores per socket every year, it should not be the only focus. In fact, it is extremely rare for me to see sustained loads of more than 20-40 “threads running” on well-configured systems. With modern solutions like ProxySQL, you can restrict concurrency to the most optimal levels for your server through multiplexing. Not to mention the thread pool, which is available in MySQL Enterprise, Percona Server and MariaDB. I would like to see much more benchmark focus at medium-to-low concurrency. The fact that single-thread performance has gotten slower in every major MySQL version is not a good thing. As MySQL currently runs a single query in a single thread, it impacts query latencies in many real-world situations.

Virtualization. We need more benchmarks in virtualized environments, as virtualization and the cloud are where most workloads are these days (by number). Yes, big iron and bare metal are where you get the best performance, but it’s not where most users are running MySQL. Whenever you are looking at full-blown virtualization or containers, the performance profile can be substantially different from bare metal. Virtualized instances often have smaller CPU cores – getting the best performance with 8-16 virtual cores might be a more relevant data set for many than the performance with 100+ cores.

SSL and encryption. MySQL 5.7 was all about security. We’re supposed to be able to enable SSL easily, but was any work done on making it cheap? The benchmark Ernie Souhrada did a few years back showed a pretty high overhead (in MySQL 5.6). We need more focus on SSL performance, and making it cheaper would allow more people to run MySQL with SSL.
I would also love to see more benchmarks with encryption enabled, to understand better how much it costs to have your data encrypted “at rest,” and in what cases.

Protocol X and MySQL Doc Store. These were added after MySQL 5.7 GA, so it would be unfair to complain about the lack of benchmarks comparing their performance with previous versions. But if Protocol X is the future, some benchmarks are in order. It would be great to have official numbers on how much overhead using MySQL Doc Store has compared to SQL (especially since we know that queries are converted to SQL for execution).

Replication benchmarks. There are a lot of great replication features in newer MySQL versions: statement/row/mixed formats, GTID or no GTID, a choice of multiple formats for row events, various forms of semi-sync replication, two ways of parallel replication and multi-source replication. Additionally, MySQL group replication is on the way. There seem to be very few comprehensive benchmarks for these features, however. We really need to understand how they scale and perform under various workloads.

Mixed workloads. Perhaps one of the biggest differences between benchmarks and real production environments is that in benchmarks the same workload is often used over and over, while in the real world there is a mix of “application workloads.” The real world also has additional tasks such as backups, reporting or running “online” ALTER TABLE operations. Practical performance is performance you can count on while also serving these types of background activities. Sometimes you can get a big surprise from the severity of impact from such background activities.

Compression benchmarks. There have been some InnoDB compression benchmarks (both for new and old methods), but they are a completely separate set of benchmarks that are hard to put in context with everything else. For example, do they scale well with high numbers of connections and large numbers of cores?
Long-running benchmarks. A lot of the benchmarks[...]

run are rather short. Many of the things that affect performance take time to accumulate: memory fragmentation on the process (or OS kernel) side, disk fragmentation and database fragmentation. For a database that is expected to run many months without restarting, it would be great to see some benchmark runs that last several days/weeks, to check long-term stability and whether there is a regression or resource leak.

Complex queries. While MySQL is not an analytical database, it is still possible to run complex queries with JOINs, and the MySQL optimizer team provides constant improvements to the optimizer. It would be quite valuable to see how optimizer improvements affect query execution. We want to see how these improvements affect scalability with hardware and concurrency as well.

These are just some ideas on what could be done. Of course, there are only so many things the performance engineering team can focus on at a time: one can’t boil the ocean! My main suggestion is this: we have done enough deep optimizing of primary key lookups with sysbench on high concurrency and monster hardware, and it’s time to go wider. This ensures that MySQL doesn’t falter with poor performance on commonly run workloads. Benchmarks like these have much more practical value than beating one million primary key selects a second on a single server.
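The latency point above is easy to demonstrate: two workloads can have an identical mean latency (and therefore look the same in a throughput graph) while one of them stalls badly. A small illustrative sketch with made-up numbers:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    rank = max(1, min(len(s), round(p / 100 * len(s))))
    return s[rank - 1]

steady  = [10] * 100            # 100 queries at a flat 10 ms each
stalled = [5] * 95 + [105] * 5  # same 10 ms mean, but 5 long stalls

for name, data in (("steady", steady), ("stalled", stalled)):
    mean = sum(data) / len(data)
    print(f"{name}: mean={mean:.0f} ms, p99={percentile(data, 99)} ms")
```

Both workloads report a 10 ms mean, but only the p99 column exposes the 105 ms stalls – which is exactly what a throughput-only benchmark hides.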

Planet MySQL Business Continuity and MySQL Backups This blog post discusses the business continuity plan around MySQL backups, and how organizations should think about them. During the years I’ve worked in IT, I’ve learned that backups are sometimes a conceptual subject in organizations. Many companies have them, but don’t document the associated business continuity plan for them. I experienced this the hard way many, many years ago, somewhere around the time MySQL 5.0 was still widely used. In most organizations, there are a couple of business continuity subjects that should be described internally. For example, what is the recovery time objective and what is the recovery point objective? Let’s go a bit deeper into both concepts:

Recovery Point Objective: A recovery point objective describes the upper limit on how much data, measured in time, can be lost during a major incident – for example, recovery after a massive data center failure. One of the questions you should ask ahead of such situations is: what is a tolerable time window for lost information? If you have a recovery point objective of over a day, your daily backup routines might cover this. However, if you have a more stringent recovery point objective, you might be forced to add tools like binary log streaming or incremental backups.

Recovery Time Objective: This second term and concept is also essential in building a business continuity plan. Your environment has to remain active to generate traffic and, potentially, revenue. What are the requirements promised to your customers? Are there any SLAs agreed with the customer, or is it best effort? If it’s best effort, what would be the tipping point for your users to start using an alternative service from a competitor? These are all factors to consider while determining your RTO.
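The RPO arithmetic above can be sketched in a few lines (a deliberately simplified model with hypothetical numbers, ignoring backup duration and transfer time): periodic backups alone bound your data loss by the backup interval, so a tighter RPO forces extra mechanisms such as incremental backups or binary log streaming.

```python
def worst_case_data_loss_h(backup_interval_h):
    """With only periodic backups, a failure just before the next backup
    loses up to one full interval of data."""
    return backup_interval_h

def plan_for_rpo(rpo_h, backup_interval_h=24):
    """Naive planner: periodic backups suffice only when the RPO is at
    least as long as the backup interval."""
    if worst_case_data_loss_h(backup_interval_h) <= rpo_h:
        return "periodic backups are enough"
    return "add incremental backups or binary log streaming"

print(plan_for_rpo(36))  # RPO of a day and a half: daily backups cover it
print(plan_for_rpo(4))   # 4-hour RPO: daily backups alone cannot meet it
```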
In Short: If the recovery point objective and recovery time objective are stringent, additional costs might be required when buying hardware, or perhaps a secondary data center becomes mandatory. However, it’s a cost/value discussion: what makes your company lose revenue, and what is acceptable during a crisis? Based on your business continuity requirements, you can build your DR plans. Make sure your business continuity requirements build the DR plan, and not vice versa. What tools do you have at your disposal to create sensible MySQL backups?

Logical backups

mysqldump. Remember mysqldump, the original tool included in MySQL? The good thing about mysqldump is that you can actually read and even edit the output of the backup before potentially restoring data, which can prove interesting during development work. mysqldump’s biggest negative is that it is neither scalable nor fast for backing up large amounts of data. Additionally, restoring data is even slower, as you must replay the complete dataset on your new MySQL database servers (rebuilding indexes, large IO, etc.). mysqldump’s advantages include the convenience and flexibility of viewing or even editing the output before restoring. It gives you the ability to clone databases for development, and to produce slight variations of an existing database for testing.

mydumper. This tool is comparable to mysqldump, however it works in parallel, which provides significant benefits in backup time and restoration time.

Binary backups

Binary backups refer to copies made of the entire MySQL dataset. Binary backups are typically faster than logical backups, especially on larger datasets. Several tools come to mind in these cases.

Percona XtraBackup. An open-source binary backup solution for InnoDB. The good thing about XtraBackup is that it is non-locking when using MySQL with the InnoDB storage engine.

MySQL Enterprise Backup. An InnoDB hot backup so[...]

Planet MySQL Business Continuity and MySQL Backups This blog post discusses the business continuity plan around MySQL backups, and how organizations should think about them. During the years I’ve worked in IT, I’ve learned that backups are sometimes treated as a purely conceptual subject in organizations. Many companies have them, but don’t document the associated business continuity plan for them. I experienced this the hard way many years ago, somewhere around the time MySQL 5.0 was still widely used. In most organizations, there are a couple of business continuity subjects that should be described internally, for example the recovery time objective and the recovery point objective. Let’s go a bit deeper into both concepts: Recovery Point Objective: A recovery point objective describes the maximum amount of data, measured in time, that can be lost during a major incident, for example recovery after a massive data center failure. One of the questions you should ask before such situations arise is: how much lost information is tolerable? If you have a recovery point objective of over a day, your daily backup routines might cover this. However, if your recovery point objective is more stringent, you might be forced to add tools like binary log streaming or incremental backups. Recovery Time Objective: This second concept is also essential in building a business continuity plan. Your environment has to remain active to generate traffic and, potentially, revenue. What requirements have you promised to your customers? Are there SLAs agreed with the customer, or is it best effort? If it’s best effort, what would be the tipping point for your users to switch to an alternative service from your competitor? These are all factors to consider when determining your RTO. 
In Short If the recovery point objective and recovery time objective are stringent, additional costs might be required for hardware, or a secondary data center might become mandatory. However, it’s a cost/value discussion: what makes your company lose revenue, and what is acceptable during a crisis? Based on your business continuity requirements, you can then build your DR plans. Make sure your business continuity requirements drive the DR plan, and not vice versa. What tools do you have at your disposal to create sensible MySQL backups? Logical backups mysqldump. Remember mysqldump, the original tool included with MySQL? Its biggest advantage is that you can actually read, and even edit, the output of the backup before restoring, which can prove useful during development work and lets you clone databases or produce slight variations of an existing database for testing. mysqldump’s biggest negative is that it is neither scalable nor fast for backing up large amounts of data. Restoring is even slower, as you must replay the complete dataset on your new MySQL database servers (rebuilding indexes, heavy IO, etc.). mydumper. This tool is comparable to mysqldump, but it works in parallel, which provides significant benefits in both backup and restoration time. Binary backups Binary backups are copies of the entire MySQL dataset. They are typically faster than logical backups, especially on larger datasets. Several tools come to mind here. Percona XtraBackup. An open source binary backup solution for InnoDB. The good thing about XtraBackup is that it is non-locking when using MySQL with the InnoDB storage engine. MySQL Enterprise Backup. An InnoDB hot backup so[...]
lution that is included in the subscription level of MySQL Enterprise. These tools can offer you incremental and daily backups, but they still don’t give you point-in-time recovery on their own. If your recovery point objective is very tight, you may need to externally store (back up) your binary logs and replay them on your restored database. Keep in mind that this potentially impacts your recovery time objective. Delayed Slaves A delayed slave is not a backup, but the technique can help you recover your database and limit the recovery time significantly. Conclusion We’ve discussed having a business continuity requirement list, and some tools that might help you cover it (at least on the MySQL level). One last item that matters is actual testing: far too many companies only discover that their backups are corrupted when they actually need a recovery. Make sure your organization tests its backups regularly, and that you perform regression tests for new code, for example against a restored copy of the backups. If you know you can trust your backups, you might sleep better at night!   ;-).
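The RPO arithmetic in the post above ("a recovery point objective of over a day" being covered by daily backups) can be sketched as a quick sanity check. This is a minimal illustration with hypothetical function names and numbers, not tied to any particular backup tool:

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """With periodic full backups only, a failure just before the next
    backup loses everything written since the last one completed."""
    return backup_interval

def plan_meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """True if the backup schedule alone satisfies the RPO; False means
    you need something extra, e.g. binary log shipping or incrementals."""
    return worst_case_data_loss(backup_interval) <= rpo

# Daily backups vs. a 24h RPO: covered.
print(plan_meets_rpo(timedelta(hours=24), timedelta(hours=24)))   # True
# Daily backups vs. a 15-minute RPO: need binlog streaming or incrementals.
print(plan_meets_rpo(timedelta(hours=24), timedelta(minutes=15))) # False
```

The same check generalizes to incremental schedules by using the incremental interval as the worst-case loss window.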

KOT
03.12.2016
06:50:33
Why hasn't the bot been changed yet? Nothing but walls of text in the thread

Egor
03.12.2016
17:40:44
Why hasn't the bot been changed yet? Nothing but walls of text in the thread
Changed it; this one pulls the full description too

KOT
03.12.2016
17:41:12
Have it pull only the link, without the text

Egor
03.12.2016
17:41:33
Unfortunately, that can't be configured

Feed Reader Bot
03.12.2016
21:13:43
Planet MySQL MySQL Support Engineer's Chronicles, Issue #4 This week I had to deal with some unusual problems. But let me start with Percona's XtraBackup, software that I consider a key component of many current production MySQL setups and use regularly. Recently new minor versions of XtraBackup were released; check the details on 2.4.5, for example. It made a step towards support of MariaDB 10.2, but there is still a long way to go, see this pull request #200. My main problem with xtrabackup, though, is not the lack of support for MariaDB 10.2-specific features. Why should they care, after all... The problem is that old, well-known bugs and problems are not resolved, including those that may affect all MySQL versions, forks and environments. Check lp:1272329, "innobackupex dies if a directory is not readable. i.e.: lost+found", for example. Why not read and honor the ignore_db_dir option (as the server does) and let those poor souls who used a mount point as a datadir make backups? Check an even older problem, passwords that are not hidden in the command line, see lp:907280, "innobackupex script shows the password in the ps output, when its passed as a command line argument". My colleague Hartmut even suggested a fix recently, see pull request #289. Because of these old, known problems (some of them low-hanging fruit) that are not fixed, users suffer while using xtrabackup far more often than they would like to. One day, as a result, they'll have to switch to some other online backup tool or approach. One may dream about an extended myrocks_hotbackup covering InnoDB one day (when MyRocks and InnoDB work together in one instance), or just use Percona TokuBackup (after adding a script to do SST for Galera with it, maybe), or try something totally different. 
Anyway, I feel that if more bugs (including low-hanging fruit) in xtrabackup don't get fixed and pull requests are not actively accepted, the tool may become much less relevant and much less used soon. I had to deal with MaxScale-related issues this week, so I'd like to remind those who use Docker for testing about https://github.com/asosso/maxscale-docker/blob/master/Dockerfile. Personally I prefer to build from source. In any case, I'd like us all to remember that in older versions one may have to set the strip_db_esc option explicitly for a service to deal with database names containing an underscore (_). Recent 2.0.x versions have it enabled by default (see MXS-801). I also had to explain how online ALTER TABLE works, specifically when it sets exclusive metadata locks in the process. I still do not see this topic properly explained in the manual, so I had to report Bug #84004, "Manual misses details on MDL locks set and released for online ALTER TABLE". I have not been a developer for 11 years already, and even less should anyone expect Java code from me. Anyway, I had to explain how to replace Oracle's ref_cursors (a.k.a. cursor variables) in MySQL, both in stored procedures and in Java code that calls them. If you are wondering what this is about, check this fine manual. Note that this feature is missing in MySQL, even though it was suggested to implement it here. In general, MySQL just lets you run SELECTs in stored procedures, and then in Java you can process each of the returned result sets any way you want. Things get more complicated when more than one result set is produced, and they are even more complicated in Oracle with nested cursor expressions. So, I plan to devote a separate blog post to this topic one day. Stay tuned. I answer questions coming not only from customers. Old friends, community users out of nowhere and, even more so, colleagues are welcome to discuss whatever MySQL- or MariaDB-related problem they may have. 
If I know how to help, I'll do thi[...]
s, otherwise I'll quickly explain that I am of no good use. This is how I ended up quickly testing MariaDB's CONNECT storage engine as a replacement for the Federated engine, that is, to link a table to a remote MySQL table. Basic instructions on how to set it up and use the MySQL table type looked simple, but when I tried to test it on Fedora 23 I hit a problem of a missing libodbc.so.1:

MariaDB [(none)]> INSTALL SONAME 'ha_connect';
ERROR 1126 (HY000): Can't open shared library '/home/openxs/dbs/mariadb10.1/lib/plugin/ha_connect.so' (errno: 2, libodbc.so.1: cannot open shared object file: No such file or directory)

The solution was not really straightforward. First of all I had to install the unixODBC.x86_64 2.3.4-1.fc23 RPM, but it also does not provide libodbc.so.1:

[openxs@fc23 node2]$ find / -name libodbc.* 2>/dev/null
/usr/lib64/libodbc.so.2
/usr/lib64/libodbc.so
/usr/lib64/libodbc.so.2.0.0

So, I had to apply a quick and dirty hack:

[openxs@fc23 node2]$ sudo ln -s /usr/lib64/libodbc.so.2.0.0 /usr/lib64/libodbc.so.1

As a result the CONNECT engine worked as expected, as long as a proper account and IP address were used:

MariaDB [test]> INSTALL SONAME 'ha_connect';
Query OK, 0 rows affected (0.27 sec)

MariaDB [test]> create table r_t2(id int primary key, c1 int) engine=connect table_type=mysql connection='mysql://msandbox:msandbox@127.0.0.1:23532/test/t';
Query OK, 0 rows affected (0.04 sec)

MariaDB [test]> select * from r_t2;
+----+------+
| id | c1   |
+----+------+
|  1 |    2 |
|  2 |    3 |
+----+------+
2 rows in set (0.00 sec)

From configuring MaxScale to work with a database having an underscore in its name, to re-writing Java code that used to work with Oracle RDBMS for MySQL, with many missing details in the manuals and software bugs identified or reported in between, and all that alongside ongoing studies of performance problems and lost quorums, rolling upgrades and failed SSTs in Galera clusters: this is what support engineers here at MariaDB have to deal with during a typical working week.
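The underscore issue mentioned in this post (strip_db_esc, MXS-801) exists because `_` is a single-character wildcard in SQL LIKE patterns, so an unescaped database name like my_db would also match my1db. A minimal Python sketch of LIKE-style matching, purely to illustrate the semantics (this is not MaxScale's actual matching code, and the helper name is made up):

```python
import re

def sql_like_match(pattern: str, text: str) -> bool:
    """Translate a SQL LIKE pattern into a regex: % -> .*, _ -> .,
    while backslash-escaped \% and \_ stay literal characters."""
    out = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == '\\' and i + 1 < len(pattern) and pattern[i + 1] in '%_':
            out.append(re.escape(pattern[i + 1]))  # escaped wildcard: literal
            i += 2
            continue
        if ch == '%':
            out.append('.*')   # any sequence of characters
        elif ch == '_':
            out.append('.')    # any single character
        else:
            out.append(re.escape(ch))
        i += 1
    return re.fullmatch(''.join(out), text) is not None

# Unescaped: the underscore is a wildcard, so "my_db" matches "my1db" too.
print(sql_like_match('my_db', 'my1db'))    # True
# Escaped (what strip_db_esc controls): only the literal name matches.
print(sql_like_match(r'my\_db', 'my1db'))  # False
print(sql_like_match(r'my\_db', 'my_db'))  # True
```

This is why a proxy that matches database names with LIKE semantics must escape `_` before comparing, or names like db_prod can be routed incorrectly.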

Planet MySQL Speaking in December 2016 I neglected to mention my November appearances but I’ll just write trip reports for all this. December appearances are: ACMUG MySQL Special Event – Beijing, China – 10 December 2016 – come learn about Percona Server, MyRocks and lots more! A bit of a Japan tour, we will be in Osaka on the 17th, Sapporo on the 19th, and Tokyo on the 21st. A bit of talk of the various proxies as well as the various servers that exist in the MySQL ecosystem. Looking forward to discussing MySQL and its ecosystem this December!

Planet MySQL Tab Sweep – MySQL ecosystem edition Tab housekeeping, but I also realise that people seem to have missed announcements, developments, etc. that have happened in the last couple of months (and boy have they been exciting). I think we definitely need something like the now-defunct MySQL Newsletter (and no, DB Weekly or NoSQL Weekly just don’t seem to cut it for me!). MyRocks During @scale (August 31), Yoshinori Matsunobu mentioned that MyRocks has been deployed in one region for 5% of its production workload at Facebook. By October 4 at the Percona Live Amsterdam 2016 event, Percona CEO Peter Zaitsev said that MyRocks is coming to Percona Server (blog). On October 6, it was also announced that MyRocks is coming to MariaDB Server 10.2 (note I created MDEV-9658 back in February 2016, and that’s a great place to follow Sergei Petrunia’s progress!). Rick Pizzi talks about MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint. His blog also has other thoughts on MyRocks and InnoDB. Of course, check out the new site for all things MyRocks! It has a getting started guide amongst other things. Proxies: MariaDB MaxScale, ProxySQL With MariaDB MaxScale 2.0 being relicensed under the Business Source License (from GPLv2), almost immediately there was a GPLScale fork; however I think the more interesting/sustainable fork comes in the form of AirBnB MaxScale (GPLv2 licensed). You can read more about it in their introductory post, Unlocking Horizontal Scalability in Our Web Serving Tier. ProxySQL has a new webpage, a pretty active mailing list, and it’s the GPLv2 solution by DBAs for DBAs. Vitess Vitess 2.0 has been out for a bit, and a good guide is the talk at Percona Live Amsterdam 2016, Launching Vitess: How to run YouTube’s MySQL sharding engine. It is still insanely easy to get going (if you have a credit card), at their vitess.io site.

Planet MySQL Taking Full Advantage of Galera Multi-Master Webinar – Dec 13th Description This webinar will be the second in our series on best practices to follow when using Galera Cluster. In this part, we will discuss important topics related to multi-master setups:
* Practical considerations when using Galera in a multi-master setup
* Evaluating the characteristics of your database workload
* Preparing your application for multi-master
* Detecting and dealing with transaction conflicts
This is a webinar presented by Codership, the developers and experts of Galera Cluster.
DATE AND TIME FOR USA: Tuesday, 9-10 AM PST (Pacific), 13th of December. REGISTER TO USA WEBINAR WITH THIS LINK
DATE AND TIME FOR EMEA: Tuesday, 11-12 AM EEST (Eastern European Time), 13th of December. REGISTER TO EMEA WEBINAR WITH THIS LINK
DURATION: 60 minutes with questions and answers.

Planet MySQL Importing InnoDB Partitions in MySQL 5.6 and MariaDB 10.0/10.1 Transportable tablespaces for InnoDB tables is a very useful feature added in MySQL 5.6 and MariaDB 10.0. With this new feature, an InnoDB table’s tablespace file can be copied from one server to another, as long as the table uses a file-per-table tablespace. Unfortunately, the initial transportable tablespace feature in MySQL 5.6 and MariaDB 10.0 does not support partitioned tables. ... Read More

Planet MySQL Using the MariaDB Audit Plugin with MySQL The MariaDB audit plugin is an audit plugin that is bundled with MariaDB server. However, even though it is bundled with MariaDB, the plugin is actually compatible with MySQL as well. In this blog post, I will describe how to install the plugin with MySQL. Install the plugin Unfortunately, neither MariaDB Corporation nor MariaDB Foundation currently distribute a standalone binary ... Read More

Planet MySQL Facebook MyRocks at MariaDB (spetrunia, Tue, 12/06/2016 - 10:17) Recently my colleague Rasmus Johansson announced that MariaDB is adding support for the Facebook MyRocks storage engine. Today I’m going to share a bit more on what that means for MariaDB users. Members of the Facebook Database Engineering team helped us answer some questions we think our community will have about MyRocks. Benefits of MariaDB Server’s Extensible Architecture Before discussing specifics of MyRocks, new readers may benefit from a description of the MariaDB Server architecture, which is extensible at every layer, including the storage layer. This means users and the community can add functionality to meet unique needs. Community contributions are one of MariaDB’s greatest advantages over other databases, and a big reason for us becoming the fastest growing open source database in the marketplace. Openness in the storage layer is especially important because being able to use the right storage engine for the right use case ensures better performance optimization. Both MySQL and MariaDB support InnoDB, a well-known, general purpose storage engine. But InnoDB is not suited to every use case, so the MariaDB engineering team is extending support for additional storage engines, including Facebook’s MyRocks for workloads requiring greater compression and IO efficiency, and MariaDB ColumnStore (currently in beta), which will provide faster time-to-insight with Massively Parallel Execution (MPP). Facebook MyRocks for MariaDB When searching for a storage engine that could give greater performance for web-scale applications, MyRocks was an obvious choice because of its superior handling of data compression and IO efficiency. Besides that, its LSM architecture allows for very efficient data ingestion, like read-free replication slaves, or fast bulk data loading. 
As we add support for new storage engines, many of our current users may ask, “What happens to MariaDB’s support for InnoDB? Do I have to migrate?” Of course not! We have no plans to abandon InnoDB. InnoDB is a proven storage engine and we expect it to continue to be used by MariaDB users. But we do expect that deployments that need the highest possible efficiency will opt for MyRocks because of its performance gains and IO efficiency. Over time, as MyRocks matures, we expect it will become appropriate for even more use cases. The first MariaDB version of MyRocks will be available in a release candidate of MariaDB Server 10.2 coming this winter. Our goal is for MyRocks to work with all MariaDB features, but some of them, like optimistic parallel replication, may not work in the first release. MariaDB is an open source project that follows the "release early, release often" approach, so our goal is to first make a release that meets core requirements, and then add support for special cases in subsequent releases. Now let’s move on to my discussion with Facebook’s Database Engineering team! Can you tell us a bit about the history of RocksDB at Facebook? In 2012, we started to build an embedded storage engine optimized for flash-based SSDs by forking LevelDB. The fork became RocksDB, which was open-sourced in November 2013 [1]. After RocksDB proved to be an effective persistent key-value store for SSDs, we enhanced RocksDB for other platforms. We improved its performance on DRAM in 2014 and on hard drives in 2015, two platforms with production use cases now. Over the past few years, we've introduced numerous features and improvements. To name a few, we built the compaction filter and merge operator in 2013, backup and column families in 2014, transactions and bulk loading in 2015, and a persistent cache in 2016. See the list of features that are not in LevelDB: https://github.com/faceboo[...]
k/rocksdb/wiki/Features-Not-in-LevelDB. Early RocksDB adopters at Facebook such as the distributed key-value store ZippyDB [2], Laser [2] and Dragon [3] went into production in early 2013. Since then, many more new or existing services at Facebook have started to use RocksDB every year. Now RocksDB is used in a number of services across multiple hardware platforms at Facebook. [1] https://code.facebook.com/posts/666746063357648/under-the-hood-building-and-open-sourcing-rocksdb/ and http://rocksdb.blogspot.com/2013/11/the-history-of-rocksdb.html [2] https://research.facebook.com/publications/realtime-data-processing-at-facebook/ [3] https://code.facebook.com/posts/1737605303120405/dragon-a-distributed-graph-query-engine/ Why did FB go down the RocksDB path for MySQL? MySQL is a popular storage solution at Facebook because we have a great team dedicated to running MySQL at scale that provides a high quality of service. The MySQL tiers store many petabytes of data that have been compressed with InnoDB table compression. We are always looking for ways to improve compression, and the LSM algorithm used by RocksDB has several advantages over the B-Tree used by InnoDB. This led us to MyRocks: RocksDB is a key-value storage engine. MyRocks implements the MySQL storage engine API to make RocksDB work with MySQL and provide SQL functionality. Our initial goal was to get 2x more compression from MyRocks than from compressed InnoDB without affecting read performance. We exceeded our goal. In addition to getting 2x better compression, we also got much lower write rates to storage, faster database loads, and better performance. Lower write rates enable the use of lower-endurance flash, and faster loads simplify the migration from MySQL on InnoDB to MySQL on RocksDB. While we don't expect better performance for all workloads, the way in which we operate the database tier for the initial MyRocks deployment favors RocksDB more than InnoDB. 
Finally, there are features unique to an LSM that we expect to support in the future, including the merge operator and compaction filters. MyRocks can be helpful to the MySQL community because of efficiency and innovation. We considered multiple write-optimized database engines. We chose RocksDB because it has excellent performance and efficiency and because we work directly with the team. The MyRocks effort has benefited greatly from being able to collaborate on a daily basis with the RocksDB team. We appreciate that the RocksDB team treats us like a very important customer. They move fast to make RocksDB better for MyRocks. How was MyRocks developed? MyRocks is developed by engineers from several locations across the globe. The team had the privilege to work with Sergey Petrunia right from the beginning, and he is based in Russia. At Facebook's Menlo Park campus, Siying Dong leads RocksDB development and Yoshinori Matsunobu leads the collaboration with MySQL infrastructure and data performance teams. From the Seattle office, Herman Lee worked on the initial validation of MyRocks that gave the team the confidence to proceed with MyRocks for our user databases as well as led the MyRocks feature development. In Oregon, Mark Callaghan has been benchmarking all aspects of MyRocks and RocksDB, which has helped developers prioritize performance improvements and feature work. Since the rollout began, the entire database engineering team has been helping to make MyRocks successful by developing high-confidence testing, improving MySQL rollout speed, and addressing other issues. At the same time, the MySQL infrastructure and data performance teams worked to adapt our automation around MyRocks. What gave Facebook the confidence to move to MyRocks in production? Much of our early testing with the new storage engine was running the Linkbench benchmark used to simulate Facebook's social graph workload. While these resul[...]

k/rocksdb/wiki/Features-Not-in-LevelDB . Early RocksDB adopters at Facebook such as the distributed key-value store ZippyDB [2], Laser [2] and Dragon [3] went into production in early 2013. Since then, many more new or existing services at Facebook started to use RocksDB every year. Now RocksDB is used in a number of services across multiple hardware platforms at Facebook. [1] https://code.facebook.com/posts/666746063357648/under-the-hood-building-and-open-sourcing-rocksdb/ and http://rocksdb.blogspot.com/2013/11/the-history-of-rocksdb.html [2] https://research.facebook.com/publications/realtime-data-processing-at-facebook/ [3] https://code.facebook.com/posts/1737605303120405/dragon-a-distributed-graph-query-engine/ Why did FB go down the RocksDB path for MySQL? MySQL is a popular storage solution at Facebook because we have a great team dedicated to running MySQL at scale that provides a high quality of service. The MySQL tiers store many petabytes of data that have been compressed with InnoDB table compression. We are always looking for ways to improve compression and the LSM algorithm used by RocksDB has several advantages over the B-Tree used by InnoDB. This led us to MyRocks: RocksDB is a key-value storage engine. MyRocks implements that MySQL storage engine API to make RocksDB work with MySQL and provide SQL functionality. Our initial goal was to get 2x more compression from MyRocks than from compressed InnoDB without affecting read performance. We exceeded our goal. In addition to getting 2x better compression, we also got much lower write rates to storage, faster database loads, and better performance. Lower write rates enable the use of lower endurance flash, and faster loads simplify the migration from MySQL on InnoDB to MySQL on RocksDB. While we don't expect better performance for all workloads, the way in which we operate the database tier for the initial MyRocks deployment favors RocksDB more than InnoDB. 
Finally, there are features unique to an LSM that we expect to support in the future, including the merge operator and compaction filters.

MyRocks can be helpful to the MySQL community because of efficiency and innovation. We considered multiple write-optimized database engines. We chose RocksDB because it has excellent performance and efficiency, and because we work directly with the team. The MyRocks effort has benefited greatly from being able to collaborate on a daily basis with the RocksDB team. We appreciate that the RocksDB team treats us like a very important customer. They move fast to make RocksDB better for MyRocks.

How was MyRocks developed?

MyRocks is developed by engineers from several locations across the globe. The team had the privilege to work with Sergey Petrunia right from the beginning, and he is based in Russia. At Facebook's Menlo Park campus, Siying Dong leads RocksDB development and Yoshinori Matsunobu leads the collaboration with the MySQL infrastructure and data performance teams. From the Seattle office, Herman Lee worked on the initial validation of MyRocks that gave the team the confidence to proceed with MyRocks for our user databases, and also led the MyRocks feature development. In Oregon, Mark Callaghan has been benchmarking all aspects of MyRocks and RocksDB, which has helped developers prioritize performance improvements and feature work. Since the rollout began, the entire database engineering team has been helping to make MyRocks successful by developing high-confidence testing, improving MySQL rollout speed, and addressing other issues. At the same time, the MySQL infrastructure and data performance teams worked to adapt our automation around MyRocks.

What gave Facebook the confidence to move to MyRocks in production?

Much of our early testing with the new storage engine was running the Linkbench benchmark used to simulate Facebook's social graph workload. While these results were promising, we could not rely completely on them to make a decision. In order for MyRocks to be compelling for our infrastructure, MyRocks needed to reduce space and write rates by 50% compared with InnoDB on production workloads. Once we supported enough features in MyRocks, we created a MyRocks test replica from a large production InnoDB server. We built a tool to duplicate the read and write traffic from the production InnoDB server to the MyRocks test replica. Compared with compressed InnoDB, we confirmed that MyRocks used half the space and reduced the storage write rate by more than half while providing similar response times for read and write operations. We ran tests where we consolidated two InnoDB production servers onto a single MyRocks server and showed that our hardware can handle the doubled workload. This was the final result we needed: it showed that MyRocks is capable of reducing our server requirements by half, and it gave us the confidence to switch from InnoDB to MyRocks.

What approach did Facebook take for deploying MyRocks in production?

Moving to a new storage engine for MySQL comes with some risk and requires extensive testing and careful planning. Starting the RocksDB deployment with our user databases that store the social graph data may seem counterintuitive. However, the team chose to go this route for two mutually reinforcing reasons:

- Based on benchmark and production experiments, the efficiency gains were significant and proportional to the scale of the deployment.
- The workload on our user database tier is relatively simple, well known, and something our engineering team could easily reason about, as most of it comes from our TAO cache.

The benefits we expect, as well as further details on the MyRocks project, can be found in Yoshinori's post. Both MyRocks and MariaDB are open source projects that are made stronger with community involvement.

How will it help MyRocks when MariaDB releases a supported version?
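The merge operator mentioned earlier is one of the LSM-specific features the team expects to support: instead of a read-modify-write for every update (e.g., a counter increment), the application records a delta as a "merge operand", and operands are folded together lazily on read or during compaction. A toy Python sketch of the idea; the names here are illustrative, not RocksDB's API:

```python
class MergeCounterStore:
    """Toy illustration of an LSM merge operator for counters:
    increments are appended as merge operands instead of doing a
    read-modify-write, and the operands are folded together lazily
    on read (RocksDB would also fold them during compaction)."""

    def __init__(self):
        self.operands = {}                # key -> list of pending deltas

    def merge(self, key, delta):
        # Blind write: no read of the current value is needed,
        # which is what makes merge cheap in a write-optimized store.
        self.operands.setdefault(key, []).append(delta)

    def compact(self, key):
        # Fold all operands into one value, as compaction would.
        self.operands[key] = [sum(self.operands.get(key, []))]

    def get(self, key):
        # Reads see the folded result of all pending operands.
        return sum(self.operands.get(key, [0]))

store = MergeCounterStore()
for _ in range(3):
    store.merge("page:home", 1)
store.compact("page:home")
print(store.get("page:home"))   # -> 3
```

The design point is that updates never pay the cost of reading the old value; only readers (and compaction) pay the cost of folding, which suits write-heavy counter-style workloads.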
How would you like to see the community get more involved?

We expect MyRocks to get better faster when it is used beyond Facebook. But for that to happen, it needs to be in a distribution like MariaDB Server that has great documentation, expert support, a community, and many power users. The community brings more skills, more use cases, and more energy to the MyRocks effort. We look forward to getting bug reports when MyRocks doesn't perform as expected, feature requests that we might have missed, and pull requests for bug fixes and new features. I am most excited about attending conference talks about MyRocks presented by people who don't work at Facebook. While I think it is a great storage engine, the real test is whether other people find it useful, and hopefully useful enough that they want to talk and write about it.

Recently, Rasmus Johansson announced that MariaDB is adding support for the Facebook MyRocks storage engine. In this blog, members of the Facebook Database Engineering team helped us answer some questions we think our community will have about MyRocks.

KOT
06.12.2016
17:05:39
Nah, shut it the fuck off, it's spam and nothing else

Egor
06.12.2016
18:53:08
Okay

/remove@TheFeedReaderBot

Feed Reader Bot
06.12.2016
18:53:12
Select the feed to remove, or /cancel

Egor
06.12.2016
18:53:22
/list@TheFeedReaderBot

Feed Reader Bot
06.12.2016
18:53:22
Please type the number of the feed you want to remove, or /cancel

Egor
06.12.2016
18:53:26
1

Feed Reader Bot
06.12.2016
18:53:35
Feed successfully removed

 